Berkeley View on Parallelism

By: David Kanter, February 15, 2008 10:41 am
Room: Moderated Discussions
Howard Chu on 2/14/08 wrote:
>Anders Jensen on 2/14/08 wrote:
>>Anders Jensen on 2/14/08 wrote:
>>>Quoted from the white paper "The Landscape of Parallel Computing Research: A View from Berkeley".
>>>This paper just gave Berkeley $10M over 5 years from MS and Intel to research the
>>>future of parallel computing. Happy reading.
>* The overarching goal should be to make it easy to write programs that execute
>efficiently on highly parallel computing systems
>Of course "easy" and "efficient" are opposed goals.

Of course, but Python is a reasonably efficient language, and it is easy. I think the idea is to provide a parallel language that is simple and easy for application programmers, while system developers can use C or C++ or whatever they want.

>* Instead of traditional benchmarks, use 13 "Dwarfs" to design and evaluate
>parallel programming models and architectures. (A dwarf is an algorithmic method
>that captures a pattern of computation and communication.)
>uh huh, sure. micro-benchmarks by another name, which all happen to have a nasty
>habit of giving zero predictive worth once they're all combined into a full running system.

I think there are several issues with the dwarf approach. As you pointed out, they are not full applications. Patterson's group has previously done some bad work as a result of using microbenchmarks that do not reflect real workloads (register windows). Of course, that was 10-20 years ago, and this is a totally different set of students - I'm trusting they won't make those mistakes.

While the 13 dwarfs *may* be representative of workloads in the future, they currently aren't all that commonly used and don't reflect current applications. The big problem is that almost every bit of computer architecture research has shown that outside of a few niche markets, you cannot force people to rewrite their software. Sure, the smart guys will, but I remember working for Boeing in 2000, and they still had to use DOS to run some mission-critical applications. Inertia is a bitch.

I'm also not 100% sure how the dwarfs were chosen - was it on the basis of being parallel themselves, or was it because people are genuinely interested in them?

>* "Autotuners" should play a larger role than conventional compilers in translating parallel programs.
>These guys seem to like introspective JVMs and JIT optimizers. I always view this
>as a losing proposition. I can either have 100% of my CPU resources crunching a
>solution to my problem, or 100-N% crunching, and N% trying to dynamically re-optimize
>my code. Hint - write your code correctly in the first place.

This is one place where I think they are spot on. Parallel programming is obscenely expensive, and we need to drive the cost down in order to get more out of future MPUs. One way to do that is to ensure that a lot of parallelism is automatically extracted.

Once upon a time, people thought compilers were stupid. But once they were able to get 80-90% of the performance of a human coding assembly, they became pretty damn popular (that, and when architectures stopped being programmer-friendly).
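And autotuning isn't dynamic recompilation anyway - it's what ATLAS and FFTW already do: generate candidate implementations, time them on the actual machine once, and keep the winner. A toy sketch of that idea (the names and candidates are mine, just for illustration):

```python
# Toy autotuner: empirically pick the fastest of several candidate
# implementations on this particular machine. Real autotuners (ATLAS,
# FFTW) do this at install/plan time, so the N% overhead Howard worries
# about is paid once, not on every run.
import timeit

def sum_loop(data):
    total = 0
    for x in data:
        total += x
    return total

def sum_builtin(data):
    return sum(data)

def autotune(candidates, sample, trials=100):
    # Time each candidate on a representative input; return the fastest.
    timings = {f: timeit.timeit(lambda f=f: f(sample), number=trials)
               for f in candidates}
    return min(timings, key=timings.get)

best = autotune([sum_loop, sum_builtin], list(range(10_000)))
```

The same search-over-variants approach extends naturally to picking tile sizes, thread counts, and data layouts for parallel kernels.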

I think there's a clear trend towards HLLs for application programmers. I don't see why this would stop for parallelism.

>* To maximize programmer productivity, future programming models must be more
>human-centric than the conventional focus on hardware or applications.
>Kinda like what I touched on before, designing programming languages whose input
>tokens aren't character-based. Aside from that it's all bunk. 3000+ years ago a
>guy named Hercules had to clean out the Augean stables. Today mucking horse stalls
>is still a dirty job. That's the nature of the job.

Some things are easier in Python than in C though...

>* To be successful, programming models should be independent of the number of processors.
>* To maximize application efficiency, programming models should support a wide
>range of data types and successful models of parallelism: task-level parallelism,
>word-level parallelism, and bit-level parallelism.

What about instruction-level parallelism? And TLP...
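For reference, bit-level parallelism (the least familiar of the three they list) is the kind of thing a SWAR popcount exploits: each word-wide operation updates many narrow fields at once. A standard 64-bit example:

```python
# SWAR population count: treat a 64-bit word first as 32 2-bit lanes,
# then 16 4-bit lanes, and so on, so every add/mask operates on many
# fields in parallel -- bit-level parallelism in action.
def popcount64(x):
    x -= (x >> 1) & 0x5555555555555555
    x = (x & 0x3333333333333333) + ((x >> 2) & 0x3333333333333333)
    x = (x + (x >> 4)) & 0x0F0F0F0F0F0F0F0F
    # Multiply sums the per-byte counts into the top byte.
    return ((x * 0x0101010101010101) & 0xFFFFFFFFFFFFFFFF) >> 56
```

Five word-wide operations instead of a 64-iteration loop - that's the win the paper's "bit-level parallelism" refers to.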

>* Architects should not include features that significantly affect performance
>or energy if programmers cannot accurately measure their impact via performance counters and energy counters.

This sounds like a good idea. Energy counters are definitely an interesting one.


>* To explore the design space rapidly, use system emulators based on Field
>Programmable Gate Arrays (FPGAs) that are highly scalable and low cost.

Ah, someone is still pimping RAMP, I see.
