By: Ricardo B (ricardo.b.delete@this.xxxxxx.xx), May 13, 2013 2:40 am
Room: Moderated Discussions
EduardoS (no.delete@this.spam.com) on May 12, 2013 7:21 pm wrote:
> Ricardo B (ricardo.b.delete@this.xxxxx.xx) on May 12, 2013 5:48 pm wrote:
> > Lossless compression (WinRAR, etc) in general sees big improvements.
> > They're trival to parallelize effectively to any number of threads and they're very low on ILP.
>
> Your definition of "trivial" may be a bit to loose...
No, it's not.
I may be wrong about the inner workings of some of these applications, but I don't think so.
AFAIK, all of these algorithms are block based: the data to be compressed is divided into blocks, with sizes from hundreds of kB to a few MB, and these blocks are compressed independently.
As long as you have enough memory, the most efficient way to multi-thread them is also the simplest: have different threads work on different blocks, and make sure each block is big enough that the thread synchronization overhead is negligible.
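To make the point concrete, here's a minimal sketch of that scheme in Python using zlib (which releases the GIL during compression, so a thread pool genuinely parallelizes). The block size and worker count are illustrative choices, not anything WinRAR actually uses; note also that independently compressed blocks cost a little compression ratio, since each block starts with an empty dictionary.

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

BLOCK_SIZE = 1 << 20  # 1 MB blocks: large enough that per-block overhead is negligible

def compress_block(block: bytes) -> bytes:
    # Each block is compressed independently -- no shared state between workers.
    return zlib.compress(block, 9)

def parallel_compress(data: bytes, workers: int = 4) -> list[bytes]:
    # Split the input into fixed-size blocks and hand them to a thread pool.
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(compress_block, blocks))

def parallel_decompress(compressed: list[bytes]) -> bytes:
    # Blocks were compressed independently, so they decompress independently too.
    return b"".join(zlib.decompress(c) for c in compressed)
```

The only synchronization is the `pool.map` fan-out/fan-in, which is why this parallelizes well to many cores.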