By: Ilya Lipovsky (lipovsky.delete@this.cs.bu.edu), February 22, 2008 5:27 pm
Room: Moderated Discussions
Linus Torvalds (torvalds@osdl.org) on 2/15/08 wrote:
---------------------------
>David Patterson (pattrsn@cs.berkeley.edu) on 2/15/08 wrote:
>>
>>* The goal is to raise the level of abstraction to allow
>>people the space that we'll need to be able to make the
>>manycore bet work, rather than to be hamstrung by 15-year
>old legacy code written in 30-year old programming
>>languages.
>
>Well, you basically start off by just assuming it can
>work, and that nothing else can.
>
>That's a big assumption. It's by no means something you
>should take for granted. It's a wish that hasn't come to
>fruition so far, and quite frankly, I don't think people
>are really any closer to a solution today than they were
>two decades ago.
>
>The fact that you can find application domains where it
>does work isn't new.
>
>We've had our CM-5's, we've had our Occam programs, and
>there's no question they worked. The question is whether
>they work for general-purpose computing, and that one is
>still unanswered, I think.
>
>>* Apparently some readers skipped the part where we looked
>at the SPEC benchmarks, the embedded EEMBC benchmarks,
>>and then interviewed experts in databases, machine
>>learning, graphics as well as high performance computing
>>in trying to see if there was a short list of design
>>patterns.
>
>No, I don't think people missed that. But you simplified
>those questions down to the point where it's not clear that
>your answers matched the question any more.
>
>For example, sure, you can claim that gcc boils down to
>a "finite state machine". In a sense pretty much
>anything boils down to that (are we Turing complete
>or not?), but that doesn't really say that the dwarf
>at all represents what gcc really ends up doing.
>
>And one of the projects I work on is almost purely a
>"graph traversal with hashing", so it should match your
>dwarf to a T, but the thing is, the biggest issue is how
>to minimize the size of the graph you have to traverse, not
>to traverse it as fast as possible. The problem isn't CPU
>time, it's memory and IO footprint.
>
>And I don't think that is unheard of elsewhere. The core
>algorithm could well be parallelizable, but the problem
>isn't the linear CPU speed, it's the things outside
>the CPU.
>
>>Our bet is that the best applications, the best programming
>>languages, the best libraries,... have not yet been
>>written.
>
>If we are looking at a 100+ core future, I certainly agree.
>
>>If we as a field can succeed at this amazingly difficult
>>challenge, the future looks good. If not, then performance
>>increases we have relied upon for decades will come
>>to an abrupt halt, likely diminishing the future of the IT
>>industry.
>
>Here's my personal prediction, and hey, it's just that: a
>guess:
>(a) we'll continue to be largely dominated by linear
>issues in a majority of loads.
>(b) this may well mean that the future of GP computing
>ends up being about small, and low power, and being
>absolutely everywhere (== really dirt cheap).
>
>IOW, the expectation of exponential performance scaling
>may simply not be the thing that we even want. Yeah,
>we'll get it for those nice parallel loads, but rather
>than expect everything to get there, maybe we should just
>look forward to improving IT in other directions than pure
>performance.
>
>If the choice becomes one of "parallel but fast machines"
>and "really small and really cheap and really low power
>and 'fast enough' ones with just a couple of cores", maybe
>people will really pick the latter.
>
>Especially if it proves that the parallel problem really
>isn't practically solvable for a lot of things that people
>want to do.
>
>Pessimistic? It depends on what you look forward to.
>
>Linus
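
On the "graph traversal with hashing" point above: for what it's worth, the shape of that kind of code makes the footprint argument easy to see. Here's a generic toy sketch (my own, nothing to do with Linus's actual project): the visited hash table plus the node data are the whole working set, so the win is in shrinking the graph, not in walking it on more cores.

/* Rough shape of a "graph traversal with hashing" kernel -- a generic
 * sketch, not Linus's actual code.  Note where the cost goes: the visited
 * hash table and the node data are the working set, so shrinking the graph
 * (and its memory/IO footprint) matters more than parallelizing the walk. */
#include <stdio.h>
#include <stdlib.h>

#define BUCKETS 4096

struct node {
    unsigned id;
    struct node **edges;
    size_t nedges;
};

struct seen { struct seen *next; unsigned id; };
static struct seen *visited[BUCKETS];

static int mark_visited(unsigned id)
{
    struct seen *s;
    for (s = visited[id % BUCKETS]; s; s = s->next)
        if (s->id == id)
            return 0;                    /* already seen it */
    s = malloc(sizeof(*s));
    s->id = id;
    s->next = visited[id % BUCKETS];
    visited[id % BUCKETS] = s;
    return 1;
}

static void traverse(struct node *start)
{
    size_t cap = 64, top = 0;
    struct node **stack = malloc(cap * sizeof(*stack));
    stack[top++] = start;
    while (top) {
        struct node *n = stack[--top];
        size_t i;
        if (!mark_visited(n->id))
            continue;
        printf("visiting %u\n", n->id);  /* the per-node "work" is cheap */
        for (i = 0; i < n->nedges; i++) {
            if (top == cap)
                stack = realloc(stack, (cap *= 2) * sizeof(*stack));
            stack[top++] = n->edges[i];
        }
    }
    free(stack);
}

int main(void)
{
    struct node c = { 3, NULL, 0 };
    struct node *be[] = { &c };
    struct node b = { 2, be, 1 };
    struct node *ae[] = { &b, &c };
    struct node a = { 1, ae, 2 };
    traverse(&a);
    return 0;
}
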
Also, I dug up an interesting UCLA paper on how to utilize multiple cores to speed up linear workloads:
http://www.cs.ucla.edu/~reinman/mars/papers/TPDS-07-CS.pdf
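
The gist, as a toy illustration only (my own sketch, not the paper's actual mechanism; the run-ahead distance and access pattern are made up): give a spare core a "helper" thread that runs ahead of the sequential thread and touches the data it is about to need, so the linear pass stalls less on memory.

/* Toy helper-thread sketch (mine, not the paper's scheme): a second core
 * prefetches irregularly-accessed data ahead of an inherently sequential
 * pass.  A real implementation would need C11 atomics or proper
 * synchronization instead of volatile, and a smarter run-ahead policy. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define N (1 << 24)
#define RUNAHEAD 4096                 /* how far the helper stays in front */

static int *data, *order;             /* 'order' is a random permutation   */
static volatile long progress;        /* index reached by the main thread  */
static volatile int done;

static void *helper(void *arg)
{
    volatile int sink = 0;
    (void)arg;
    while (!done) {
        long i, lo = progress;
        long hi = lo + RUNAHEAD < N ? lo + RUNAHEAD : N;
        for (i = lo; i < hi; i++)
            sink += data[order[i]];   /* touch it -> pull it into cache */
    }
    return NULL;
}

int main(void)
{
    long i;
    long long sum = 0;
    pthread_t tid;

    data = malloc(N * sizeof(*data));
    order = malloc(N * sizeof(*order));
    for (i = 0; i < N; i++) {
        data[i] = (int)i;
        order[i] = (int)i;
    }
    for (i = N - 1; i > 0; i--) {     /* shuffle so the accesses defeat */
        long j = rand() % (i + 1);    /* the hardware prefetcher        */
        int t = order[i]; order[i] = order[j]; order[j] = t;
    }

    pthread_create(&tid, NULL, helper, NULL);

    /* The "linear workload": a dependent, sequential pass over the data. */
    for (i = 0; i < N; i++) {
        sum += data[order[i]];
        progress = i;
    }

    done = 1;
    pthread_join(tid, NULL);
    printf("sum = %lld\n", sum);
    free(data);
    free(order);
    return 0;
}
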
-Ilya