By: Pete Wilson (pete.delete@this.kivadesigngroupe.com), February 24, 2008 10:41 am
Room: Moderated Discussions
Vincent Diepeveen (diep@xs4all.nl) on 2/19/08 wrote:
---------------------------
>David Patterson (pattrsn@cs.berkeley.edu) on 2/15/08 wrote:
>---------------------------
..snip
>>The challenge is for this next generation of software to be correct, efficient, and
>>to scale with the increasing number of processors, without overburdening programmers.
>>If we as a field can succeed at this amazingly difficult challenge, the future looks
>>good. If not, then the performance increases we have relied upon for decades will come
>>to an abrupt halt, likely diminishing the future of the IT industry.
>>
>>Dave Patterson, UC Berkeley
There are many different sorts of computing problems. The one most folk seem to expend most effort on is some variant of the general-purpose PC/workstation workload, perhaps best (or at least most simply) characterised as "Microsoft software" (or "Linux software"). The overarching characteristic of this stuff is that it's terribly hard to change (for some mix of a large number of reasons), and that there is a "software product" available separately from a "hardware product" (which leads to the software being designed to be multi-hostable for economic reasons). So the lowest common denominator sets in.
But there's an enormous set of other problems in which the software is part of the product. This is the embedded space, and in many, many areas that space is dominated by cost. If you can knock 10% off the cost of an SoC by rewriting the software, then begod folk will take the necessary rewrite seriously. They won't do it immediately, for all the sensible reasons, including "oh? show me it works" and regulatory approval (for example, in automotive systems).
So how could we make embedded chips cheaper? It turns out that, even at the el-cheapo end of processor design, a 2- or 3-stage in-order pipeline is quite a lot smaller and lower in power than a 7-stage in-order pipeline. It also has much higher IPC and no need for complex branch-prediction schemes. So you can lower the cost of some SoCs by replacing their single 500 MHz in-order processor with some number of smaller, simpler 250 MHz processors - and get lower area and lower power, and therefore cheaper packaging. And many embedded systems include special-purpose processors of various forms - it'd be nice to be able to swap these in and out to build products at varying price-performance points sans re-engineering the software.
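To make that trade-off concrete, here's a back-of-envelope sketch in plain C. Every number in it - the relative pipeline areas, the IPC figures, the voltage scaling - is an illustrative assumption of mine rather than data from any real core; the point is only that two small, slow cores can match or beat one larger, faster core on throughput while spending less area and power, provided the workload actually splits across them.

/* Back-of-envelope comparison: one 500 MHz, 7-stage in-order core
   versus two 250 MHz, 3-stage in-order cores. The area, IPC and
   voltage figures below are illustrative guesses, not measurements
   of any real design. */
#include <stdio.h>

int main(void)
{
    /* assumed relative core areas (arbitrary units) */
    double area_big   = 1.0;    /* 7-stage pipeline plus branch predictor */
    double area_small = 0.35;   /* 3-stage pipeline, no predictor */

    /* assumed instructions per clock on typical embedded code */
    double ipc_big   = 0.7;     /* longer pipeline, branch penalties */
    double ipc_small = 0.9;     /* short pipeline, cheap branches */

    /* throughput in millions of instructions per second */
    double mips_big   = 1 * 500.0 * ipc_big;
    double mips_small = 2 * 250.0 * ipc_small;

    /* dynamic power scales roughly with C * V^2 * f; use area as a crude
       proxy for switched capacitance and assume the slower cores can run
       at 0.9x the supply voltage */
    double power_big   = 1 * area_big   * 1.0 * 1.0 * 500.0;
    double power_small = 2 * area_small * 0.9 * 0.9 * 250.0;

    printf("one big core : %.0f MIPS, area %.2f, relative power %.0f\n",
           mips_big, 1 * area_big, power_big);
    printf("two small    : %.0f MIPS, area %.2f, relative power %.0f\n",
           mips_small, 2 * area_small, power_small);
    return 0;
}

Of course the catch is the proviso at the end: the aggregate MIPS is only useful if the software can actually be spread across the cores, which is exactly the programming problem.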
But then you need to program it. And although 'designing and writing parallel code' is something done by tens of thousands of engineers (Verilog is a parallel programming language, albeit a nasty one, for a particular class of problems), there just aren't any good abstractions-and-compilers available for embedded systems.
So back when I was at Freescale, we took a whack at putting together a simple experimental language that could be of interest. It's called 'Plasma', a simple extension to C (and C++), and it's open source. The simplest sound-bite description is "up-to-date occam", if by up to date we mean "does garbage collection and doesn't surprise people who think in C". It comes with examples and wild claims of universal applicability. This implementation is intended to let the language be played with, NOT to program lotsa cores - for simplicity, the compiler generates code for a thread library. For industrial deployment, you'd have to build an industrial-strength compiler. But this version is fairly simple to change and use.
Look at http://opensource.freescale.com/fsl-oss-projects/
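For a flavour of the programming model, here's the sort of producer/consumer structure an occam-style language expresses directly, written instead in plain C against pthreads. The chan type and the chan_send/chan_recv helpers are illustrative names of my own, not Plasma syntax; the point is how much locking ceremony ends up in the source when the channel abstraction isn't in the language.

/* A CSP-style program: a producer sends values over a channel to a
   consumer. Written directly against pthreads; chan, chan_send and
   chan_recv are illustrative helpers, not Plasma constructs. */
#include <pthread.h>
#include <stdio.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    int             value;
    int             full;   /* 1 when a value is waiting to be read */
} chan;

static void chan_send(chan *c, int v)
{
    pthread_mutex_lock(&c->lock);
    while (c->full)                        /* wait for the slot to empty */
        pthread_cond_wait(&c->cond, &c->lock);
    c->value = v;
    c->full  = 1;
    pthread_cond_broadcast(&c->cond);
    pthread_mutex_unlock(&c->lock);
}

static int chan_recv(chan *c)
{
    pthread_mutex_lock(&c->lock);
    while (!c->full)                       /* wait for a value to arrive */
        pthread_cond_wait(&c->cond, &c->lock);
    int v   = c->value;
    c->full = 0;
    pthread_cond_broadcast(&c->cond);
    pthread_mutex_unlock(&c->lock);
    return v;
}

static chan ch = { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0, 0 };

static void *producer(void *arg)
{
    (void)arg;
    for (int i = 0; i < 5; i++)
        chan_send(&ch, i * i);             /* send squares down the channel */
    chan_send(&ch, -1);                    /* sentinel: no more data */
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, producer, NULL);
    for (int v = chan_recv(&ch); v != -1; v = chan_recv(&ch))
        printf("got %d\n", v);
    pthread_join(t, NULL);
    return 0;
}

In an occam-style notation the same program is just two parallel processes joined by a declared channel; the mutex and condition-variable boilerplate above disappears into the compiler and its runtime.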
Note that having a language doesn't mean that there is a hardware platform. There are still hard problems to solve at the system and processor architecture levels.
-- Pete