By: fewwef (fewwef.delete@this.yahoo.com), January 5, 2015 8:16 pm
Room: Moderated Discussions
Linus Torvalds (torvalds.delete@this.linux-foundation.org) on December 8, 2014 1:34 pm wrote:
> Jouni Osmala (josmala.delete@this.cc.hut.fi) on December 8, 2014 1:10 pm wrote:
> >
> > I'm assuming that 90+% of programs already run fast enough, so they don't matter for this.
> > It's all about asking the question: in what uses are current computers too slow, and can you parallelize
> > that, or are those cases already parallel? And I'm assuming you can parallelize at least 10%
> > of those times where the user waits on the CPU long enough to actually notice it.
>
> What's the advantage?
>
> You won't get scaling for much longer, and current trends are actually for lower power anyway. So what's the
> upside of pushing the whole parallelism snake-oil? We know that we need fairly complex OoO CPUs anyway, because
> people want reasonable performance and it turns out OoO is actually more efficient than slow in-order.
>
> The whole "let's parallelize" thing is a huge waste of everybody's time. There's this huge
> body of "knowledge" that parallel is somehow more efficient, and that whole huge body is pure
> and utter garbage. Big caches are efficient. Parallel stupid small cores without caches are
> horrible unless you have a very specific load that is hugely regular (i.e. graphics).
>
> Nobody is ever going to go backwards from where we are today. Those complex OoO cores aren't going
> away. Scaling isn't going to continue forever, and people want mobility, so the crazies talking about
> scaling to hundreds of cores are just that - crazy. Why give them an ounce of credibility?
>
> Where the hell do you envision that those magical parallel algorithms would be used?
>
> The only place where parallelism matters is in graphics or on the server side,
> where we already largely have it. Pushing it anywhere else is just pointless.
>
> So give up on parallelism already. It's not going to happen. End users are fine with roughly
> on the order of four cores, and you can't fit any more anyway without using too much energy
> to be practical in that space. And nobody sane would make the cores smaller and weaker in order
> to fit more of them - the only reason to make them smaller and weaker is because you want to
> go even further down in power use, so you'd still not have lots of those weak cores.
>
> So the whole argument that people should parallelise their code is fundamentally flawed.
> It rests on incorrect assumptions. It's a fad that has been going on too long.
>
> Parallel code makes sense in the few cases I mentioned, where we already largely have
> it covered, because in the server space, people have been parallel for a long time.
>
> It does not necessarily make sense elsewhere, even in completely new areas that we don't
> do today because you can't afford it. If you want to do low-power, ubiquitous computer vision
> etc., I can pretty much guarantee that you're not going to do it with code on a general-purpose CPU. You're
> likely not even going to do it on a GPU, because even that is too expensive (power-wise),
> but with specialized hardware, probably based on some neural network model.
>
> Give it up. The whole "parallel computing is the future" is a bunch of crock.
>
> Linus
FUCK
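
For what it's worth, here is a quick Amdahl's-law sanity check of the numbers being argued over above. It's only a sketch: the 10% figure is taken from the quoted post, and the core counts are just illustrative.

def amdahl_speedup(parallel_fraction, cores):
    # Overall speedup when only `parallel_fraction` of the work scales
    # across `cores` and the rest stays serial (Amdahl's law).
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

p = 0.10  # assumed fraction of user-visible wait time that parallelizes
for cores in (2, 4, 8, 64):
    print("cores=%3d  speedup=%.3fx" % (cores, amdahl_speedup(p, cores)))
# Even with unlimited cores the bound is 1/(1 - p) = ~1.11x for p = 0.10,
# which is why the upside looks so small unless far more of the wait
# time turns out to be parallelizable.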