CPU+GPU

Article: Parallelism at HotPar 2010
By: Anon (no.delete@this.email.com), July 30, 2010 9:11 pm
Room: Moderated Discussions
David Kanter (dkanter@realworldtech.com) on 7/29/10 wrote:
---------------------------

Sorry for the slow reply here David; I often have very restricted access to external networks (and very restricted allowance as to what I can discuss even when I do have access), so I am only just seeing this.

>[snip]
>
>>>>I have two discussion questions for the section "The Limits of GPUs"
>>>>
>>>>Firstly, a lot of this content seems to run along the lines of 'untuned Intel xxx
>>>>code was a lot slower than the GPU, but then we spent a lot of time rewriting/tuning
>>>>the CPU code, and it got faster!' However, no mention seems to be made of similar tuning efforts in GPU code.
>>>
>>>Well I expect for most of the GPU vs. CPU comparisons made by NV, there was plenty
>>>of GPU tuning. I trust their marketing department to do a good job. Ditto for ATI if they were in that game.
>>
>>We are not looking here at figures of an NVidia tuned implementation versus another
>>tuned implementation though, are we - there is mention of strong tuning efforts on
>>the CPU side, and no mention on the GPU side; that is all I am raising. Please do
>>not take it that I believe or support all the NVidia hype.
>
>If I understand what you are saying correctly, what you are interested in is a
>maximally tuned GPU vs. maximally tuned CPU comparison, is that right?

Only from the point of view of a valid comparison. As I am sure you appreciate, it is next to impossible to gain good information from two different systems running two very different implementations of something, and a (possibly impossible) 'maximal effort' at optimisation is probably a base requirement for any such comparison - would you not agree? History is littered with cases of people (and I certainly don't exclude NVidia here) not doing this, and claiming misleading outcomes.


>>>>I think most of the people involved in GPU Cuda programming will agree that it
>>>>is significantly HARDER to extract full potential from GPU code, although for 'suitable'
>>>>codes the gains are even larger - this looks/feels like one of these cases.... highly
>>>>tuned CPU code versus basic GPU code.
>>>
>>>I'm not really sure how true that is. If you look at some of the presentations
>>>out there, it's clear that even on Nehalem - which is one of the most well-rounded
>>>CPUs out there - you can see a 25X improvement from tuning.
>>>
>>>That's a pretty big factor.
>>>
>>>How much variation is there once you've written an algorithm in CUDA?
>>
>>I have seen factors of well over a hundred in restructurings of the same code to
>>more closely match the GPU's needs (often to do with memory access patterns). As
>>you say, Nehalem is a well rounded CPU; I would not consider any current GPU nearly
>>as well rounded - they are highly optimisation sensitive, and the optimisations
>>required are more difficult, not less (often due to the lack of tools for suitably detailed profiling, etc)
>
>Can you give some examples here of the performance improvement due to changes in the coding of an algorithm?

Two key areas are the use of specific hardware features (we have a few applications that love odd features of the GPU's texture hardware..), and the organisation of memory accesses (look at the long history of DCT optimisation for great examples of this) to suit memory/cache requirements. The first rounds of these can sometimes be done without base algorithm changes; however, often a major rethink and reimplementation can give large results.
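As a toy illustration of the memory-access-pattern point: on early CUDA hardware, a warp's loads only combine into one wide transaction when consecutive threads touch consecutive, aligned words. The sketch below is my own simplified model of that rule (the function name and alignment check are illustrative, not the exact hardware behaviour):

```python
def is_coalesced(addresses, word_size=4):
    """Toy model of coalescing: consecutive threads must read
    consecutive, aligned words for the loads to merge."""
    base = addresses[0]
    aligned = base % (word_size * len(addresses)) == 0
    consecutive = all(a == base + i * word_size
                      for i, a in enumerate(addresses))
    return aligned and consecutive

# Row-major access - thread i reads element i: merges into one transaction.
row = [i * 4 for i in range(16)]
# Column-major access with a 16-element stride: one transaction per thread.
col = [i * 4 * 16 for i in range(16)]

print(is_coalesced(row))  # True
print(is_coalesced(col))  # False
```

Restructuring a data layout (or the thread-to-data mapping) so that the second pattern becomes the first is exactly the kind of rework that produces the large factors mentioned above.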

>I think one of the issues with GPUs is that proponents of CUDA really try and portray
>it as an easy development environment. I think it's obvious that CUDA is far ahead
>of everyone else, but based on what you're saying 'easy' is not the right adjective at all.

Could I be allowed a 'some proponents' there? I think very few people who are actually using them in a real way for any length of time would claim that. The same of course applies to a lot of other high performance areas (CUDA is a lot easier in many ways than some of the FPGA stuff we do, for example, or the good old CM days..)

>>>NV marketing keeps on saying how CUDA makes programming much easier, yet it sounds
>>>like you are saying it really isn't good enough.
>>
>>Do they? That must be something I keep missing - they like to blow their trumpet
>>about good outcomes, and over-generalise these, and also talk about peak rates too
>>much (as everyone does), but do they really say it's easy?
>Yes they do, often. It tends to be in verbal conversation rather than in slides.

There is always marketing hype to wade through, especially in HPC areas; this is certainly not new, and I think most of those involved know how to filter it. No excuse for it at all, I agree, but it should also not be used to claim the opposite (that there are no advantages).

>>It has certainly become a lot EASIER, due to Cuda and OpenCL, however I don't think
>>anyone would call it easy; just look at NVidia's own examples.. they are rarely simple,
>>even though they often deal with quite trivial codes..
>
>[snip]
>
>>>>It is also interesting (and the full information is not presented) that in the
>>>>second group of cases, we seem to be comparing DP codes on the Tesla C1060, rather than the most certainly current 2070.
>>>>Now, a C1060 has around an 8:1 SP:DP ratio. The 2060 is closer to 2:1, and has nearly
>>>>7 TIMES the peak DP with a single GPU than the 1060... I do not doubt that the Nehalem
>>>>system is not the fastest current either; however, I doubt a system 7 times faster could be found.
>>>>Secondly, in this case the codes being looked at are, as NVidia appears to have
>>>>pointed out, not really prime targets of their systems anyway (and yet their OLD systems do pretty well).
>>>>
>>>>Now, this could be seen as valid 'limits' of GPUs.
>>>>1 - Older implementations (and some current) are not great at DP.
>>>>2 - GPUs are very optimisation sensitive (tools are quite new, and they are not that flexible as compute devices)
>>>>3 - GPU performance varies strongly; not all target applications are suitable.
>>>
>>>I think the overall moral of the story is that if you see a performance gap of
>>>more than 4-5X between a CPU and GPU, you should look closely at the code (and the hardware
>>>too). GPUs are fast on the right problems, but they should not be 10X faster,
>>>especially on bandwidth bound problems, where the gap narrows considerably.
>>
>>Are you missing the fact that the 2060 has 7 TIMES the DP capability of the 1060
>>that they used? The moral of that story seems to be selective choice of benchmarks - or is it a historical piece?
>
>Not at all.

Not sure which part is 'not at all' - does the impact of an approximately 7x difference not register here? And that's before considering the much better caching, etc.
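For concreteness, here is the back-of-envelope arithmetic behind that 'approximately 7x', using commonly quoted figures for the C1060 (one DP unit per SM, 30 SMs, 1.296 GHz shader clock) and the C2050/C2070 (448 cores at 1.15 GHz, DP at half the SP rate, i.e. 224 DP FMA units). Treat all these numbers as approximate, not vendor-verified:

```python
def peak_dp_gflops(dp_units, clock_ghz, flops_per_cycle):
    """Peak double-precision throughput: units x clock x FLOPs/cycle (FMA = 2)."""
    return dp_units * clock_ghz * flops_per_cycle

c1060 = peak_dp_gflops(dp_units=30,  clock_ghz=1.296, flops_per_cycle=2)  # ~78 GFLOPS
c2050 = peak_dp_gflops(dp_units=224, clock_ghz=1.15,  flops_per_cycle=2)  # ~515 GFLOPS

print(round(c1060, 1), round(c2050, 1), round(c2050 / c1060, 1))
```

The ratio comes out around 6.6x, which is where the 'nearly 7 TIMES' figure above comes from.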

>>GPU can well be 10X faster, or more in some cases, but anyone who tries to over-generalise
>>that is most likely being foolish, drinking the koolaid, or does not understand.
>
>The only scenario I can see where a GPU would be >10X faster (and both CPU and
>GPU are coded well) would be where there is heavy dependence on operations which
>are slow on the CPU, but fast on the GPU.

Exactly! Those cases both exist, and in some cases are quite important.. the GPU is not a general purpose device; it is a highly tuned resource, great at some things and terrible at others.
Just ask Oracle ;)
A GPU is much more than a cluster of odd processors, it is a cluster of odd processors with some very specific but very high performance hardware support glued on the side..
The 'trick' is to find ways to use all their features, especially the unusual ones.


>I'm not sure which operations fall into that category, but perhaps some things
>like divides or square roots may be faster on a GPU.
>
>Even the fastest GPU today has ~170GB/s of memory bandwidth, which is only 4X more
>than magny-cours and 5.5X more than Nehalem-EP.

Quite - as with most systems, making good use of the (never enough..) resources is very important. Most of our applications are not memory limited; others will be..
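To make the bandwidth point concrete: for a purely streaming (bandwidth-bound) kernel, the achievable speedup is capped by the memory-bandwidth ratio, however large the raw FLOP gap is. A quick sketch using the figures quoted above (the CPU bandwidths are simply those implied by the 4X and 5.5X ratios, rounded):

```python
gpu_bw = 170.0          # GB/s, the "fastest GPU today" figure quoted above
nehalem_bw = 31.0       # GB/s, roughly 170 / 5.5
magny_cours_bw = 42.5   # GB/s, roughly 170 / 4

def streaming_speedup_cap(gpu_gbps, cpu_gbps):
    # A kernel that just streams memory can beat the CPU by at most
    # the bandwidth ratio, whatever the compute advantage is.
    return gpu_gbps / cpu_gbps

print(round(streaming_speedup_cap(gpu_bw, nehalem_bw), 1))      # -> 5.5
print(round(streaming_speedup_cap(gpu_bw, magny_cours_bw), 1))  # -> 4.0
```

Which is exactly why the gap "narrows considerably" on bandwidth-bound problems: the 10X+ numbers can only come from compute-bound or special-hardware cases.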

>
>>In fact, for certain interesting cases, where their more specialised hardware can
>>be used, they can be well over 10X faster; however, these cases are quite specific.
>
>Yes, I'd agree.
>
>>My queries on the final page of this article were more about the fact that it
>>seemed to pick some rather skewed cases which seemed, shall we say, 'selected for a purpose'.
>>It is not difficult to find code that performs terribly on the GPU, very terribly
>>in cases.. but equally there is code that performs fantastically. GPUs are just
>>not a generalised CPU, and (other than a few one-liners from marketing departments)
>>I don't really think anyone claims they are.
>
>I totally agree. GPUs are suitable for very specific sorts of workloads, where
>they fit, they tend to provide very good performance. However, if you fall outside that region, it's often very ugly.
>
>But here's an example of a rather unrealistic and glowing portrayal of the GPU:
>http://www.hpcwire.com/features/Kudos-for-CUDA-97889444.html
>
>"Contrary to the accepted wisdom that GPU computing is more difficult, I believe
>its success thus far signals that it is no more complicated than good CPU programming."
>
>That contradicts what you said before.

I guess it depends on people's particular problems, how much performance they are looking for (faster? on par?), and what systems they are used to developing for.
Developing a high-performance system near the bleeding edge is very, very difficult on just about any system.

>The article also implies that somehow GPU programming (and CUDA specifically) replaces
>SSE+OpenMP+MPI, which is just a total load of BS. A GPU can avoid the need for
>SSE, and I think you can make the argument that CUDA is more elegant than AVX/SSE.
>But the whole point of OpenMP is shared memory communication, which isn't feasible
>on GPUs - that's not a feature! And you still need MPI for most problems.

I would disagree with you, for the reason that a single GPU can act in many ways like a stack of SSE+OpenMP+MPI - though of course, depending on your target performance, a single GPU may not be enough.. The current explosion of general-purpose CPU cores is certainly changing the balance here, but it was not long ago that anyone wanting more than a very few cores was using this kind of stack, and it is not 'easy'.


>Another example is this piece on Forbes:
>http://www.forbes.com/2010/04/29/moores-law-computing-processing-opinions-contributors-bill-dally.html
>
>Moore's Law has always been about economic viability of transistor density and
>integration. It's been conflated by some people to relate to performance, but
>that's inaccurate. CPU performance scaling is not dead - but single threaded gains
>are vastly reduced. Parallel performance on CPUs is still increasing at a good
>clip. The notion that 'multi-core' is somehow a dead end is specious at best, considering
>that GPUs are themselves multi-core architectures.

Quite - the world is full of 'idiots' (too harsh?) looking for a headline.. Do we judge technologies on such foolish viewpoints now?

>>I must say that I do welcome and enjoy these articles, but a lot of people seem
>>to love the 'GPU is no good as it is not a generalised CPU!' strawman, and then
>>go selecting poor usage cases to defend that - when it is obvious to anyone who
>>applies even a little critical thinking that they are not.
>
>I'm sure some people hold that view. My perspective is more nuanced and probably more in-line with yours actually.
>
>I see the GPU as a relatively new platform, one that holds a good deal of promise
>for certain highly structured and HPC-like workloads that are free from dependencies.
>It's fundamentally different from a CPU in that it's really a bandwidth optimized
>device, and there are certain trade-offs that implies which make it unsuitable for many workloads/algorithms.
>
>Ultimately, the right balance is a combination of the CPU and GPU. What isn't
>clear is where that balance lies. The notion that you don't need a high performance
>CPU is not particularly credible since even for embarrassingly parallel workloads,
>there tends to be a fraction which is 'serial', and will limit performance gains.

I agree completely, and disagree strongly with people who don't take the time (or don't want to.. sometimes for marketing purposes) to understand GPUs' limitations. I just also contend that they are fantastic in several VERY useful areas.
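The 'serial fraction limits performance gains' point above is just Amdahl's law; a quick sketch (the 5% serial fraction and 100x parallel speedup are illustrative numbers, not from any benchmark):

```python
def amdahl_speedup(serial_fraction, parallel_speedup):
    """Overall speedup when only the parallel part is accelerated (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / parallel_speedup)

# Even a 100x-faster parallel section is capped hard by a 5% serial part:
print(round(amdahl_speedup(0.05, 100), 1))   # -> 16.8
# And no amount of parallel speedup gets past 1/serial_fraction = 20x:
print(round(amdahl_speedup(0.05, 1e9), 1))   # -> 20.0
```

Which is why a capable CPU alongside the GPU remains necessary: something has to run that serial fraction quickly.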

>
>When I hear crap like "the only interesting workloads are amenable to GPUs", it's
>quite annoying. Ditto for claimed 100X speed ups.

Did I say that? (Not baiting here, just not sure if I slipped up and said that..) I would say that there are several very interesting workloads that are amenable to GPUs, and several of those are much harder/more expensive through other methods (FPGA is a common alternative, and that's much more difficult).

>The programming models are also a huge open question, but beyond the scope of this post.

I would say they have moved forward in leaps and bounds, and I also suspect that multicore CPUs will get a lot of benefit in the medium term from lessons learned on GPUs..

It was one hell of a lot harder 5 years ago, let alone 10..

(sorry, another rushed post, hopefully not too many errors..)