By: Howard Chu (hyc.delete@this.symas.com), February 14, 2008 1:16 pm
Room: Moderated Discussions
Anders Jensen (@.) on 2/14/08 wrote:
---------------------------
>Anders Jensen (@.) on 2/14/08 wrote:
>---------------------------
>>Quoted from the white paper "The Landscape of Parallel Computing Research: A View from Berkeley".
>>
>>This paper just gave Berkeley $10M over 5 years from MS and Intel to research the
>>future of parallel computing. Happy reading.
>http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-183.html
* The overarching goal should be to make it easy to write programs that execute efficiently on highly parallel computing systems
Of course "easy" and "efficient" are opposed goals.
* The target should be 1000s of cores per chip, as these chips are built from processing elements that are the most efficient in MIPS (Million Instructions per Second) per watt, MIPS per area of silicon, and MIPS per development dollar.
MIPS per development dollar seems like an empty target. Whatever development direction you pick, the costs eventually come down as you learn more. R&D should always be a cost-is-no-object proposition, because once you find the best solution, someone will figure out how to bring the cost in line.
* Instead of traditional benchmarks, use 13 "Dwarfs" to design and evaluate parallel programming models and architectures. (A dwarf is an algorithmic method that captures a pattern of computation and communication.)
Uh huh, sure. Micro-benchmarks by another name, which all happen to have a nasty habit of yielding zero predictive value once they're combined into a full running system.
* "Autotuners" should play a larger role than conventional compilers in translating parallel programs.
These guys seem to like introspective JVMs and JIT optimizers. I always view this as a losing proposition. I can either have 100% of my CPU resources crunching a solution to my problem, or (100-N)% crunching, and N% trying to dynamically re-optimize my code. Hint - write your code correctly in the first place.
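To make the contrast concrete, here's a minimal C sketch (illustrative only, not from the paper) of what an offline autotuner in the ATLAS/FFTW sense does: time a kernel at a few candidate block sizes once, ahead of time, and bake the winner into the production build - so, unlike a JIT, no cycles are spent re-optimizing while the real workload runs.

#include <stdio.h>
#include <time.h>

#define N 1024
static double a[N][N], b[N][N], c[N][N];

/* Blocked matrix multiply, parameterized by tile size bs (bs divides N). */
static void mm_blocked(int bs)
{
    for (int ii = 0; ii < N; ii += bs)
        for (int kk = 0; kk < N; kk += bs)
            for (int jj = 0; jj < N; jj += bs)
                for (int i = ii; i < ii + bs; i++)
                    for (int k = kk; k < kk + bs; k++)
                        for (int j = jj; j < jj + bs; j++)
                            c[i][j] += a[i][k] * b[k][j];
}

int main(void)
{
    int candidates[] = { 16, 32, 64, 128 };   /* all divide N evenly */
    int best = candidates[0];
    double best_t = 1e30;

    /* The "tuning" pass: run each candidate once, keep the fastest. */
    for (unsigned i = 0; i < sizeof(candidates)/sizeof(candidates[0]); i++) {
        clock_t t0 = clock();
        mm_blocked(candidates[i]);
        double t = (double)(clock() - t0) / CLOCKS_PER_SEC;
        if (t < best_t) { best_t = t; best = candidates[i]; }
    }
    /* A real autotuner would now emit this constant into the build. */
    printf("best block size: %d (%.2fs)\n", best, best_t);
    return 0;
}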
* To maximize programmer productivity, future programming models must be more human-centric than the conventional focus on hardware or applications.
Kinda like what I touched on before - designing programming languages whose input tokens aren't character-based. Aside from that, it's all bunk. 3000+ years ago a guy named Hercules had to clean out the Augean stables; today, mucking horse stalls is still a dirty job. That's the nature of the job.
* To be successful, programming models should be independent of the number of processors.
* To maximize application efficiency, programming models should support a wide range of data types and successful models of parallelism: task-level parallelism, word-level parallelism, and bit-level parallelism.
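For the record, those three levels look roughly like this in C - a toy sketch, all names mine, none of it from the paper:

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

/* Bit-level parallelism: one pass over a 64-bit word handles 64 bits. */
static int popcount64(uint64_t x)
{
    int n = 0;
    while (x) { x &= x - 1; n++; }   /* clear lowest set bit each step */
    return n;
}

/* Word-level (data) parallelism: independent per-element work that a
 * vectorizing compiler can map onto SIMD instructions. */
static void scale(float *v, float s, int n)
{
    for (int i = 0; i < n; i++)
        v[i] *= s;
}

/* Task-level parallelism: independent chunks on separate threads. */
static void *task(void *arg)
{
    scale(arg, 2.0f, 512);
    return NULL;
}

int main(void)
{
    static float v[1024] = { [0] = 1.0f };
    pthread_t t;
    pthread_create(&t, NULL, task, v);   /* first half, other thread  */
    task(v + 512);                       /* second half, this thread  */
    pthread_join(t, NULL);
    printf("%g %d\n", v[0], popcount64(0xff));
    return 0;
}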
* Architects should not include features that significantly affect performance or energy if programmers cannot accurately measure their impact via performance counters and energy counters.
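For reference, the crudest form of such a measurement on x86 just reads the time-stamp counter around the code of interest. A sketch only: a serious measurement would serialize the pipeline and average many runs, and energy counters (where the hardware exposes them) follow the same read-before/read-after pattern.

#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>   /* __rdtsc(): x86 time-stamp counter */

int main(void)
{
    volatile double acc = 0.0;

    uint64_t t0 = __rdtsc();
    for (int i = 1; i <= 1000000; i++)   /* the code being measured */
        acc += 1.0 / i;
    uint64_t t1 = __rdtsc();

    printf("~%llu cycles (acc=%f)\n",
           (unsigned long long)(t1 - t0), (double)acc);
    return 0;
}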
* Traditional operating systems will be deconstructed and operating system functionality will be orchestrated using libraries and virtual machines.
Which directly opposes the stated goal of efficiency.
* To explore the design space rapidly, use system emulators based on Field Programmable Gate Arrays (FPGAs) that are highly scalable and low cost.