By: Potatoswatter (potswa_m.delete@this.c.com), November 12, 2009 6:50 pm
Room: Moderated Discussions
MoTheG (better@not.tell) on 11/12/09 wrote:
---------------------------
>? (0xe2.0x9a.0x9b@gmail.com) on 11/12/09 wrote:
>---------------------------
>>Couldn't that (at least partially) be solved by inserting a buffer between the pipeline stages? Like:
>>
>>[parser thread] ---> [buffer] ---> [compiler thread]
>
>I imagined this less strictly structured: I just thought, have one thread keep parsing
>all the files into the structure the compiler will use, and let the compiler(s) handle
>them at whatever rate they get there.
>If the parser is significantly faster than all the rest, it should pause and wait.
>I don't see why the parser should work only on demand; is RAM that limiting?
Like I said, sometimes the parser is slower than the code generator, for example when compiling recursive C++ templates.
It sounds like you guys are talking about work division and synchronization, which is important in any multithreaded program. However, on-demand, highly nondeterministic processing tends to be bad for caches, making it a bad strategy for compiler organization. You might even be better off exploiting code and data locality in a single slow thread than trying to saturate the resources of a multicore chip in a single process. If things work efficiently in a single thread, they should (though it's not a guarantee) remain so when multithreading is accomplished by multiple processes.
Really, I think most programmers are OK with breaking their source into files which are conveniently sized for both themselves and the compiler. Parallel make and precompiled headers will reliably perform pretty well.