By: David Kanter (dkanter.delete@this.realworldtech.com), February 15, 2008 11:41 am
Room: Moderated Discussions
Howard Chu (hyc@symas.com) on 2/14/08 wrote:
---------------------------
>Anders Jensen (@.) on 2/14/08 wrote:
>---------------------------
>>Anders Jensen (@.) on 2/14/08 wrote:
>>---------------------------
>>>Quoted from the white paper "The Landscape of Parallel Computing Research: A View from Berkeley".
>>>
>>>This paper just gave Berkeley $10M over 5 years from MS and Intel to research the
>>>future of parallel computing. Happy reading.
>>http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-183.html
>
>* The overarching goal should be to make it easy to write programs that execute
>efficiently on highly parallel computing systems
>
>Of course "easy" and "efficient" are opposed goals.
Of course, but Python is a reasonably efficient language, and it is easy. I think the idea is to provide a parallel language that is simple and easy for the app programmers, and then system developers can use C or C++ or whatever they want.
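Just to make "easy" concrete - this isn't from the paper, just a rough sketch of the kind of thing an app programmer can write in a high-level language, assuming something like Python's process-pool API (the work function and the inputs are invented for the example):

# Hypothetical illustration: data-parallel work in plain Python.
# simulate() is a made-up stand-in for some CPU-bound kernel.
from multiprocessing import Pool

def simulate(param):
    x = 0.0
    for i in range(100000):
        x += (param * i) % 7
    return x

if __name__ == "__main__":
    inputs = list(range(64))
    with Pool() as pool:               # one worker process per core by default
        results = pool.map(simulate, inputs)
    print(sum(results))

All the system-level plumbing (process creation, scheduling, shipping data back and forth) stays below the line, which is exactly the app-programmer/system-programmer split I have in mind.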
>* Instead of traditional benchmarks, use 13 "Dwarfs" to design and evaluate
>parallel programming models and architectures. (A dwarf is an algorithmic method
>that captures a pattern of computation and communication.)
>
>uh huh, sure. micro-benchmarks by another name, which all happen to have a nasty
>habit of giving zero predictive worth once they're all combined into a full running system.
I think there are several issues with the dwarf approach. As you pointed out, they are not full applications. Patterson's group has previously done some bad work as a result of using microbenchmarks that did not reflect real workloads (register windows). Of course, that was 10-20 years ago, and this is a totally different set of students - I'm trusting they won't make those mistakes.
While the 13 dwarfs *may* be representative of future workloads, they aren't all that commonly used today and don't reflect current applications. The big problem is that almost every bit of computer architecture research has shown that, outside of a few niche markets, you cannot force people to rewrite their software. Sure, the smart guys will, but I remember working for Boeing in 2000, and they still had to use DOS to run some mission-critical applications. Inertia is a bitch.
I'm also not 100% sure how the dwarfs were chosen - was it on the basis of being parallel themselves, or because people are genuinely interested in them?
>* "Autotuners" should play a larger role than conventional compilers in translating parallel programs.
>
>These guys seem to like introspective JVMs and JIT optimizers. I always view this
>as a losing proposition. I can either have 100% of my CPU resources crunching a
>solution to my problem, or 100-N% crunching, and N% trying to dynamically re-optimize
>my code. Hint - write your code correctly in the first place.
This is one place where I think they are spot on. Parallel programming is obscenely expensive, and we need to drive the cost down in order to get more out of future MPUs. One way to do that is to ensure that a lot of parallelism is automatically extracted.
Once upon a time, people thought compilers were stupid. But once they were able to get 80-90% of the performance of a human coding assembly, they became pretty damn popular (that, and architectures stopped being programmer-friendly).
I think there's a clear trend towards HLLs for application programmers. I don't see why this would stop for parallelism.
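To make the autotuner idea concrete (my sketch, not anything from the paper): rather than the compiler guessing tuning parameters statically, the library times a few candidate configurations on the actual machine and keeps the winner. The kernel and the candidate block sizes below are invented purely for illustration:

# Toy autotuner sketch: empirically pick a block size for a blocked
# array sum instead of trusting a static heuristic.
import time

def blocked_sum(data, block):
    total = 0
    for start in range(0, len(data), block):
        total += sum(data[start:start + block])
    return total

def autotune(data, candidates):
    best_block, best_time = None, float("inf")
    for block in candidates:
        t0 = time.time()
        blocked_sum(data, block)       # trial run timed on this machine
        elapsed = time.time() - t0
        if elapsed < best_time:
            best_block, best_time = block, elapsed
    return best_block                  # use the winner from now on

if __name__ == "__main__":
    data = list(range(1000000))
    print(autotune(data, [256, 1024, 4096, 16384]))

ATLAS and FFTW already do essentially this, and the search can be run once at install time, so Howard's N% runtime re-optimization tax doesn't have to be paid on every execution.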
>* To maximize programmer productivity, future programming models must be more
>human-centric than the conventional focus on hardware or applications.
>
>Kinda like what I touched on before, designing programming languages whose input
>tokens aren't character-based. Aside from that it's all bunk. 3000+ years ago a
>guy named Hercules had to clean out the Augean stables. Today mucking horse stalls
>is still a dirty job. That's the nature of the job.
Some things are easier in Python than in C, though...
>* To be successful, programming models should be independent of the number of processors.
>* To maximize application efficiency, programming models should support a wide
>range of data types and successful models of parallelism: task-level parallelism,
>word-level parallelism, and bit-level parallelism.
What about instruction-level parallelism? And TLP...
>* Architects should not include features that significantly affect performance
>or energy if programmers cannot accurately measure their impact via performance counters and energy counters.
This sounds like a good idea. Energy counters are definitely an interesting one.
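For what it's worth, here's a rough sketch of what an energy counter might look like from the programmer's side. Everything here is an assumption for illustration: a Linux-style system that exposes a cumulative counter in microjoules through a sysfs file (the path below is one plausible form, not something the paper specifies):

# Hypothetical sketch of measuring the energy cost of a code region.
# ENERGY_FILE is an assumed sysfs path exposing a cumulative counter
# in microjoules; the real interface would vary by platform.
ENERGY_FILE = "/sys/class/powercap/intel-rapl:0/energy_uj"

def read_energy_uj():
    with open(ENERGY_FILE) as f:
        return int(f.read())

def measure(func, *args):
    before = read_energy_uj()
    result = func(*args)
    after = read_energy_uj()
    # a real implementation would also handle counter wraparound
    return result, (after - before) / 1e6   # joules used during the call

if __name__ == "__main__":
    _, joules = measure(sum, range(10000000))
    print("approx package energy: %.3f J" % joules)

If architects actually exposed something like this per core, programmers could see the energy side of a design decision instead of guessing.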
[snip]
>* To explore the design space rapidly, use system emulators based on Field
>Programmable Gate Arrays (FPGAs) that are highly scalable and low cost.
Ah, someone is still pimping RAMP, I see.
David