By: Ricardo B (ricardo.b.delete@this.xxxxx.xx), May 14, 2013 5:06 am
Room: Moderated Discussions
RichardC (tich.delete@this.pobox.com) on May 14, 2013 3:54 am wrote:
> Maynard Handley (name99.delete@this.name99.org) on May 13, 2013 6:52 pm wrote:
>
> > SMT) help with this to a small extent. They do help when I want certain tasks to run faster (the
> > usual video encode, the slightly less usual large Mathematica jobs, the occasional situation where
>
> The specialized video transcode hardware handles that faster than threaded software.
Yes, it can.
But lossy encoders are always a work in progress, and the constantly evolving software encoders deliver a better quality/file-size ratio and fewer artifacts than fixed-function hardware encoders.
Using the GPU for encoding, of course, gives almost the best of both worlds.
But even then, some stages of the encoding are best implemented on the CPU, and those stages are SMT-friendly.
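To illustrate what I mean, here is a minimal sketch (not taken from any real encoder; the slice split and the dummy per-sample work are just placeholders): a CPU-bound stage gets chopped into per-slice tasks, one per hardware thread, and std::thread::hardware_concurrency() already counts the SMT contexts.

// Minimal sketch of an SMT-friendly, CPU-bound encoding stage:
// split one frame plane into slices and process them on every
// logical CPU, including SMT siblings.
#include <cstdint>
#include <functional>
#include <iostream>
#include <thread>
#include <vector>

// Toy stand-in for a per-slice stage (filtering, analysis, etc.);
// a real encoder would run its own kernels here.
static void process_slice(std::vector<uint8_t>& frame, size_t begin, size_t end) {
    for (size_t i = begin; i < end; ++i)
        frame[i] = static_cast<uint8_t>(frame[i] * 3 + 1);  // dummy work
}

int main() {
    const size_t frame_size = 1920 * 1080;             // one 1080p plane
    std::vector<uint8_t> frame(frame_size, 42);

    unsigned n = std::thread::hardware_concurrency();  // logical CPUs, incl. SMT
    if (n == 0) n = 1;

    std::vector<std::thread> workers;
    const size_t chunk = frame_size / n;
    for (unsigned t = 0; t < n; ++t) {
        size_t begin = t * chunk;
        size_t end = (t + 1 == n) ? frame_size : begin + chunk;
        workers.emplace_back(process_slice, std::ref(frame), begin, end);
    }
    for (auto& w : workers) w.join();

    std::cout << "processed " << frame_size << " samples on "
              << n << " hardware threads\n";
}

On a 4-core/8-thread part this spins up 8 workers; the stage doesn't care whether the extra contexts are full cores or SMT siblings, it just scales.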
> Well, it's the same as the original argument for RISC: a feature which greatly
> speeds up an infrequent operation, but slightly slows down frequent operations,
> is probably a bad idea.
That was not the original argument for RISC.
> On the wider point, second-guessing Intel's engineers is a big part of what
> happens on this forum. On this particular topic, it's clear that designing
> the same CPU core for use with the very different server vs desktop/laptop
> workloads must involve some compromises: it can't be optimal for both. It can be
> (and is) very good for both.
And Intel clearly favors the desktop/laptop need for single-thread performance.
If you want something optimized for threaded workloads and servers at the expense of single-thread performance, look at AMD's CPUs, not Intel's.
Removing SMT from Intel CPUs would improve single-thread performance by 1-2% or less, at the expense of worsening multi-thread performance by ~25% on average.
And to whatever extent normal people run CPU-bound workloads, a significant part of those workloads is multi-threaded.
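As a toy back-of-envelope (using my rough figures above, not measurements, and an assumed split of CPU time between single- and multi-threaded phases), the net effect of dropping SMT looks like this:

// Toy model: combine the ~1-2% single-thread gain and ~25%
// multi-thread loss quoted above over an assumed workload mix.
#include <cstdio>
#include <initializer_list>

int main() {
    const double st_gain = 1.015;  // ~1-2% faster single-thread without SMT (estimate)
    const double mt_loss = 0.75;   // ~25% slower multi-thread without SMT (estimate)
    // f = assumed fraction of CPU time spent in multi-threaded phases.
    for (double f : {0.0, 0.25, 0.5, 0.75, 1.0}) {
        double time_without_smt = (1.0 - f) / st_gain + f / mt_loss;
        std::printf("MT fraction %.2f -> overall x%.3f vs. keeping SMT\n",
                    f, 1.0 / time_without_smt);
    }
    return 0;
}

With that arithmetic, even a 50/50 mix comes out roughly 14% slower overall without SMT, which is the point I'm making above.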