Tarek's Hotpar 2010 article online

Article: Parallelism at HotPar 2010
By: Anon (no.delete@this.email.com), July 28, 2010 4:18 pm
Room: Moderated Discussions
Thank you for your detailed reply, I hope my rather hurried initial query did not seem too critical.

I am also quite interested in the level of optimisations, because optimising GPU code is one of my major functions these days (well, also directing others in the right direction).
Interestingly, I also cannot comment too directly on the codes you used (though the information is useful), as we very rarely use pre-existing libraries ourselves and hand-tune all our code per application.
I would, as you say, assume that NVidia have competent matrix libraries, so long as the data layouts, etc. are suitable.

One of the most enlightening tools we use when tuning is a 'soak' application that runs semi-independently and can be asked to consume GPU compute, cache, external bandwidth, bus bandwidth, etc. as required. We use it to verify that our codes are fully utilising a specific area of capability, and often it shows us the areas where we are not - which is sometimes surprising/enlightening.
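To make the idea concrete, here is a minimal host-side sketch in Python of what such a soak does (the real tool targets the GPU; every name and number below is invented for illustration): a background thread streams a large buffer to burn memory bandwidth while your workload runs, so you can see how much headroom the workload actually leaves on that resource.

```python
import threading
import time
import numpy as np

def soak_memory_bandwidth(stop_event, buf_mb=256):
    """Stream a large buffer until stop_event is set; return achieved GB/s."""
    src = np.ones(buf_mb * 1024 * 1024 // 8, dtype=np.float64)
    dst = np.empty_like(src)
    bytes_moved = 0
    t0 = time.perf_counter()
    while True:
        np.copyto(dst, src)                  # read src + write dst, whole buffer
        bytes_moved += 2 * src.nbytes
        if stop_event.is_set():
            break
    return bytes_moved / (time.perf_counter() - t0) / 1e9

def run_with_soak(workload, buf_mb=256):
    """Run workload() while a soak thread consumes memory bandwidth.
    Returns (workload_result, soak_GB_per_s)."""
    stop = threading.Event()
    box = {}
    t = threading.Thread(
        target=lambda: box.update(gbps=soak_memory_bandwidth(stop, buf_mb)))
    t.start()
    try:
        result = workload()
    finally:
        stop.set()
        t.join()
    return result, box["gbps"]
```

Comparing the workload's runtime with and without the soak running tells you how bandwidth-bound it is; the same per-resource idea (compute, cache, PCIe) is what makes the GPU version so revealing.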

I will be reading your mentioned papers with interest (once I can clear a few projects, grumble..)

Small-n is always an issue, and we often push these cases back to the CPU (well, we never move them off the CPU), as they are not only inefficient on the GPU but can cause large performance losses in other simultaneous tasks by stalling significant resources. Luckily many of our datasets are terabytes...
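That push-back decision can be captured with a toy cost model (all rates and overheads below are invented for illustration): the GPU pays a fixed launch/transfer overhead per call, so below some crossover size the CPU wins even at a tenth of the raw throughput.

```python
def choose_device(n, cpu_rate=50e9, gpu_rate=500e9, gpu_overhead_s=50e-6):
    """Pick the device with the lower estimated wall time for an O(n^2)
    kernel (~20 flops per pair, as in a direct n-body inner loop).
    All rates/overheads are made-up illustrative numbers."""
    flops = 20.0 * n * n                        # total work estimate
    t_cpu = flops / cpu_rate                    # CPU starts immediately
    t_gpu = gpu_overhead_s + flops / gpu_rate   # GPU pays launch/transfer cost
    return "gpu" if t_gpu < t_cpu else "cpu"
```

With these particular numbers the crossover sits at n of a few hundred; the point is only that a fixed offload cost makes small-n cases a net loss on the GPU.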

As to question 2, I hope I didn't give the impression that I had spoken to NVidia about these cases - I was simply reading the comment as indicating that these were not what NVidia would consider target applications..

There are most definitely issues with how parts of NVidia choose to promote 'GPGPU' (a term which certainly did not originate from NVidia, and one I dislike), and I don't believe that many in 'the industry' really believe that GPUs are even close to being GP.. They are a very good tool for a somewhat restricted subset of applications, where runtimes and datasets are suitably large, codes map well to the GPU model, and development times can be suitably 'extended'..

IMHO, the GP in GPGPU is heavily misunderstood - from my position it is referring to the fact that, unlike not that long ago, these days you can use what appear to be 'normal' languages and write code that makes use of compute-type interfaces and functionality, whereas in the past using a GPU was a matter of making every problem look like image rendering - a much more difficult and limiting task.
I well remember when readback from a GPU was (artificially) limited to PCI speed on AGP busses, because the vendors did not care about readback..
To me, 'General Purpose' means the GPU is no longer just about transformation, lighting, and scan conversion..

I feel that our viewpoints are very much aligned, and I would most strongly agree with your heterogeneous systems view. I do find it very tiresome that there are two other camps that seem to feel GPUs are a threat or a target for some reason.

1 - the 'parallel is too hard, and doesn't work anyway!' crowd, who want everything to be considered as scalar code and only want faster CPUs - I also love faster scalar CPUs, with faster GPUs alongside!

2 - the 'GPUs are toys' brigade, who like to point out the weaknesses of GPUs (of which there are many!) and ignore their strengths; they often like to point to systems 20% faster with 20 times the budget..

I myself believe that the GPGPU approach has opened up a whole new area of price/performance for a range of important codes; however, that range is somewhat limited, which is probably for the best - making a truly general-purpose GPU would probably reduce its performance to that of a general-purpose CPU (surprise!), and Intel probably do that better anyway!



Rich Vuduc (richie@cc.gatech.edu) on 7/28/10 wrote:
---------------------------
>You raise two fair questions, which I'd like to address.
>
>[Question 1] Regarding how well tuned the GPU codes are, I'd be hypocritically
>violating Bailey's Rule #6 (use a bad baseline) if we didn't put in some effort.
>Whether the effort is *fair* is up for grabs, but I'll say the following:
>
>(a) For sparse matrix-vector multiply, the "best" GPU codes shown are the best
>of NVIDIA's implementations by Bell & Garland, as well as those of my student, Jee
>Choi. These are all tuned on the GPU fairly well.
>
>(b) For the sparse direct solver, we are using CUBLAS. It's debatable whether these
>implementations are the best out there, but one would trust they are reasonably
>well-tuned, though things could be better.
>
>(c) For the fast multipole method (FMM)---whose results are shown in single-precision,
>by the way---we did not write a Mickey Mouse code. The FMM uses a "direct 'n^2'
>n-body" computation as a subroutine. The updated version of our HotPar'10 talk slides,
>which appeared in a Dept. of Energy-hosted meeting called "SciDAC'10", show that
>when 'n' is sufficiently large this subroutine gets 640 Gflop/s (65% of peak) on
>a Fermi card. So, even if it's not the best code out there, I think we can claim
>it's not unreasonable. The only problem is that for the FMM, this subroutine has
>to run fast when 'n' is relatively much smaller, which is where the GPU advantage
>decreases. A fair question, then, is whether we can make this subroutine fast for
>small 'n'. We are actively doing this on the GPU this summer, because we see the
>GPU as an integral assistant in a full FMM code on a likely future system with both
>multicore CPU and GPU components. (Shameless plug: We are part of a team whose upcoming
>paper at Supercomputing'10 uses both CPU and GPU for the FMM.) Now, we're not done
>yet but hope to make significant in-roads for this case.
>
>[Question 2] You say that NVIDIA does not consider the computations we considered
>to be the prime targets of their systems. I suppose this is possible. However, they
>clearly have a high-performance computing strategy, and in HPC, we care about things
>like sparse iterative and direct solvers, as well as scalable n-body problems. I
>think physically-realistic games and graphics care about these things, too, though
>I'll admit right away that I'm not an expert on those kinds of apps. But just to
>throw it out there, a friend of mine at Lucas Arts, who led the physics engine development
>on The Force Unleashed game, uses a finite-element solver to simulate how objects
>deform when you use the force on them. So if the computations we care about are
>not within the scope of what a GPU should be good at, it begs the question in my
>mind of how "general-purpose" a GPGPU is.
>
>I'd like to conclude by saying that I'm a big believer in heterogeneous systems
>with GPU components! We do a lot of GPU work at Georgia Tech and are heavily investing
>our research efforts on how to use these systems. The only real point of the talk
>was to say that, for the benefit of the applications development community that
>has to spend time writing all this code, we should forget the marketing hype, set
>realistic expectations, and do the hard work of figuring out how best to use these
>computational resources and building better tools.
>
>-- Rich V. @ Georgia Tech
>
>Anon (no@email.com) on 7/27/10 wrote:
>---------------------------
>>David Kanter (dkanter@realworldtech.com) on 7/27/10 wrote:
>>---------------------------
>>>Today is shaping up to be an excellent day for a number of reasons:
>>>
>>>We have an excellent new contributor, Tarek Chammah. Tarek is a graduate student
>>>at the University of Waterloo who specializes in software approaches to parallelization,
>>>including run-times, languages, APIs, etc.
>>>
>>>I recently had the opportunity to go to lunch with Tarek and we had an excellent
>>>time and I learned quite a lot about the trends on the software side of the equation.
>>>One of the points that Tarek emphatically made is that with the emergence of parallel
>>>processing, the software is becoming equally important as the hardware. Times are
>>>a changing, and it's not just about your good old compiler getting code into shape
>>>for the hardware; software is truly an essential part of the glue that binds together
>>>the system, and I hope to be able to discuss software more in the future at RWT.
>>>
>>>Second, Tarek has provided us with an excellent article covering some of the highlights
>>>of the HotPar 2010 workshop. Hot Par was held in Berkeley this year, and included
>>>a fair number of papers - but almost all of them were software focused. This is
>>>a nice change of pace from our usual coverage of ISSCC, Hot Chips or IEDM:
>>>
>>>http://www.realworldtech.com/page.cfm?ArticleID=RWT072610001641
>>>
>>>Please join me in thanking Tarek for his contribution, and I look forward to some lively discussions.
>>>
>>>
>>>David
>>
>>I would most certainly agree that more input on this subject is most welcome.
>>
>>I have two discussion questions for the section "The Limits of GPUs"
>>
>>Firstly, a lot of this content seems to run along the lines of 'untuned Intel xxx
>>code was a lot slower than the GPU, but then we spent a lot of time rewriting/tuning
>>the CPU code, and it got faster!' However, no mention seems to be made of similar tuning efforts in the GPU code.
>>I think most of the people involved in GPU Cuda programming will agree that it
>>is significantly HARDER to extract full potential from GPU code, although for 'suitable'
>>codes the gains are even larger - this looks/feels like one of these cases.... highly
>>tuned CPU code versus basic GPU code.
>>
>>It is also interesting (and the full information is not presented) that in the
>>second group of cases, we seem to be comparing DP codes on a Tesla C1060, rather than the most certainly current C2070.
>>Now, a C1060 has around an 8:1 SP:DP ratio. The C2070 is closer to 2:1, with nearly
>>7 TIMES the peak DP of the C1060 from a single GPU... I do not doubt that the Nehalem
>>system is not the fastest available either; however, I doubt a system 7 times faster could be found.
>>Secondly in this case, the codes being looked at are, as NVidia appears to have
>>pointed out, not really prime targets of their systems anyway (and yet their OLD systems do pretty well).
>>
>>Now, these could be seen as valid 'limits' of GPUs:
>>1 - Older implementations (and some current ones) are not great at DP.
>>2 - GPUs are very optimisation-sensitive (tools are quite new, and GPUs are not that flexible as compute devices).
>>3 - GPU performance varies strongly; not all target applications are suitable.
>>
>