By: Vincent Diepeveen (diep.delete@this.xs4all.nl), February 19, 2008 6:18 am
Room: Moderated Discussions
David Patterson (pattrsn@cs.berkeley.edu) on 2/15/08 wrote:
---------------------------
>Since we spent almost 2 years of our lives working on this report, I'd add my perspective to this discussion.
>* The goal is to raise the level of abstraction to allow people the space that
>we'll need to be able to make the manycore bet work, rather than to be hamstrung
>by 15-year-old legacy code written in 30-year-old programming languages.
>* This report is more than a year old, and we now realize that what we were really
>talking about is design patterns, in the sense of the original 1977 book "A Pattern
>Language" by the Berkeley architecture professor Christopher Alexander (as opposed
>to the Gang of Four book on OO programming inspired by the Berkeley book)
>* Apparently some readers skipped the part where we looked at the SPEC benchmarks,
>the embedded EEMBC benchmarks, and then interviewed experts in databases, machine
>learning, graphics as well as high performance computing in trying to see if there was a short list of design patterns.
>* Based on our 2-year investigation, we make the provocative claim that your programming
>language, compiler, libraries, computer architecture ... better be able to handle
>these design patterns well, because they will be important in the upcoming decade
>in many apps. There are likely more design patterns than these 13, but they include, for example
>- Finite State Machines
>- Branch and Bound
>- Graph Algorithms
>which aren't in most people's lists of scientific computing problems.
>
>Our bet is that the best applications, the best programming languages, the best
>libraries,... have not yet been written.
Without wanting to quote selectively, it is certainly true that there is a lot of room to improve upon programming languages.
The best programming language has not been invented yet. The current object-oriented languages are, in fact, the opposite of what we want.
What happens in big projects with many lines of code is that templates, endless subclasses and all the other fancy C++ constructs pile up. Vectors, inheritance, you can list them all yourself; all those concepts produce a huge code base. Software that just grows and grows in size.
The biggest problem in such big software is the hard fact that, however interesting object-oriented programming may be on paper, memory allocation and deallocation are really ugly, slow operations for the processor.
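To make that allocation point concrete, here is a minimal C++ sketch (my own hypothetical example, nothing from the Berkeley report): the first routine allocates and frees a node on the heap in every iteration, the way object-oriented code tends to; the second reuses storage that was allocated once, the way speed-critical C-style code (a chess engine, say) does it. Exact timings depend on compiler and allocator; the point is only the structural difference.

// alloc_compare.cpp - hypothetical sketch: heap traffic per iteration vs. reuse.
#include <cstdio>
#include <vector>

struct Node {
    int score;
    int move;
};

// Object-oriented style: every node is created on and returned to the heap.
long sum_with_heap(int n) {
    long total = 0;
    for (int i = 0; i < n; ++i) {
        Node* p = new Node;     // allocator call in the inner loop
        p->score = i;
        p->move = i * 2;
        total += p->score;
        delete p;               // and a matching free
    }
    return total;
}

// C-style: allocate once up front, reuse the same storage every iteration.
long sum_with_pool(int n) {
    std::vector<Node> pool(1);  // one reusable slot, allocated once
    long total = 0;
    for (int i = 0; i < n; ++i) {
        pool[0].score = i;
        pool[0].move = i * 2;
        total += pool[0].score;
    }
    return total;
}

int main() {
    std::printf("%ld %ld\n", sum_with_heap(1000000), sum_with_pool(1000000));
    return 0;
}

Compile both with optimization and time them yourself; the loop without new/delete is the one that keeps the processor busy with actual work.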
Such programming languages also end up with big problems in multithreading/multiprocessing.
Want many cores?
Fine, but effectively you will get less out of them than what we had available a few years ago on a single CPU.
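Here is another hypothetical sketch of why this bites harder once you go multithreaded: every thread that allocates its objects through the shared heap ends up fighting, at least partly, over the same allocator, while threads that work in their own preallocated storage scale independently. Again, this is my own illustration assuming a standard C++11 compiler, not code from the report.

// thread_alloc.cpp - hypothetical sketch: shared-heap churn vs. thread-local reuse.
#include <cstdio>
#include <thread>
#include <vector>

// Every iteration goes through the global allocator that all threads share.
void churn_heap(long iters, long* out) {
    long total = 0;
    for (long i = 0; i < iters; ++i) {
        int* p = new int(static_cast<int>(i));
        total += *p;
        delete p;
    }
    *out = total;
}

// Each thread touches only storage it allocated once for itself.
void churn_local(long iters, long* out) {
    std::vector<int> buf(1);
    long total = 0;
    for (long i = 0; i < iters; ++i) {
        buf[0] = static_cast<int>(i);
        total += buf[0];
    }
    *out = total;
}

int main() {
    const int nthreads = 4;
    const long iters = 1000000;
    std::vector<long> results(nthreads);
    std::vector<std::thread> workers;
    for (int t = 0; t < nthreads; ++t)
        workers.emplace_back(churn_heap, iters, &results[t]); // swap in churn_local to compare
    for (size_t t = 0; t < workers.size(); ++t)
        workers[t].join();
    for (int t = 0; t < nthreads; ++t)
        std::printf("thread %d: %ld\n", t, results[t]);
    return 0;
}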
So professors and researchers, wake up: please design a new-generation language that generates code as fast as C does and combines that with the kind of features C++ offers for big projects.
Java and C#, though they fill a certain market need, are not only too slow for performance-critical code; they also suffer from the object-orientation problem of how to deal efficiently with RAM.
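To illustrate that RAM point, one last hypothetical sketch of the two memory layouts, written in C++ because that lets me show both side by side: an array of plain structs sits in one contiguous block, while the "every value is a heap object" model that Java and C# push you towards amounts to an array of pointers to separately allocated objects, which costs extra memory per object and an extra indirection (usually a cache miss) on every access.

// layout.cpp - hypothetical sketch: contiguous structs vs. one heap object per element.
#include <cstddef>
#include <cstdio>
#include <vector>

struct Point {
    double x;
    double y;
};

// Contiguous layout: one allocation, all data packed together.
double sum_flat(const std::vector<Point>& pts) {
    double s = 0.0;
    for (size_t i = 0; i < pts.size(); ++i)
        s += pts[i].x + pts[i].y;
    return s;
}

// "Object per element" layout: one allocation plus one pointer per element.
double sum_boxed(const std::vector<Point*>& pts) {
    double s = 0.0;
    for (size_t i = 0; i < pts.size(); ++i)
        s += pts[i]->x + pts[i]->y;    // extra indirection on every element
    return s;
}

int main() {
    const size_t n = 100000;
    std::vector<Point> flat(n);
    std::vector<Point*> boxed(n);
    for (size_t i = 0; i < n; ++i) {
        flat[i].x = 1.0;
        flat[i].y = 2.0;
        boxed[i] = new Point(flat[i]);  // each element is its own heap allocation
    }
    std::printf("%f %f\n", sum_flat(flat), sum_boxed(boxed));
    for (size_t i = 0; i < n; ++i)
        delete boxed[i];
    return 0;
}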
The challenge for next-generation software is to keep an overview of a big product without slowing it down by a factor of 10 to 100 (because of 'generic coding standards', or whatever).
Vincent Diepeveen,
Amsterdam,
The Netherlands
>The challenge is for this next generation of software to be correct, efficient, and to
>scale with the increasing number of processors, without overburdening programmers.
>If we as a field can succeed at this amazingly difficult challenge, the future looks
>good. If not, then performance increases we have relied upon for decades will come
>to an abrupt halt, likely diminishing the future of the IT industry.
>
>Dave Patterson, UC Berkeley
>
>
>Linus Torvalds (torvalds@osdl.org) on 2/14/08 wrote:
>---------------------------
>>Ugh. They seem to make essentially all of their arguments
>>based on their "dwarfs" (shouldn't that be "vertically
>>challenged algorithm"?).
>>
>>And their dwarfs in turn seem entirely selected to then
>>support the end result they wanted. Can anybody say
>>"circular argument" ten times fast?
>>
>>Apart from the obvious graphics thing, none of their loads
>>seem at all relevant to "general purpose computing", they
>>are all essentially about scientific computing.
>>
>>And we already pretty much know the solution to scientific
>>computing: throw lots of cheap hardware on it (where "cheap"
>>is then defined by what is mass-produced for other reasons).
>>
>>Designing future hardware around the needs of scientific
>>computing seems ass-backwards. It's putting the cart in
>>front of the horse.
>>
>>Linus
>