Latency and HPC Workloads

Article: Intel's Near-Threshold Voltage Computing and Applications
By: anon (anon.delete@this.anon.com), October 16, 2012 6:48 pm
Room: Moderated Discussions
Robert Myers (rbmyersusa.delete@this.gmail.com) on October 16, 2012 9:56 am wrote:
> anon (anon.delete@this.anon.com) on October 16, 2012 8:17 am wrote:
> >
> > Interesting mindset: everybody else disagrees with me, therefore everybody
> > else is wrong.
> >
> > Although there are well known exceptions where such mindset has turned an
> > industry or scientific field upside down, 99.99x% of the time, it comes from
> > a range of people from the one who is very good but does not grasp a
> > particular aspect, down to the complete crackpot.
> >
> > Perhaps you are an exception. I would like to hear more about the problems
> > and your ideas how to fix them, if you would spare the time. (I assume that
> > "everyone stop what you're doing" is not actually your proposal!)
>
> Even though you post anonymously, and even though you are personally
> insulting, I'm going to answer your post, as you ask most broadly. In the
> future, if you don't want to be thought of as a crackpot yourself, you might
> leave off speculation about who is and who is not a crackpot.

What I said is just facts. I didn't call you a crackpot as such, but you would agree that many people with this mindset are crackpots.

And you really should think of anonymous posters as crackpots. There is really no other sane way to proceed on the internet.

> Just as soon as the players I have mentioned, plus
> anyone else who plays the same game, stops advertising "the n-th fastest
> computer in the world" based on a single benchmark (Linpack), I'll back off on
> the snarling insults about the practice.

I have some knowledge of supercomputer procurement. Not in the top 10, but in the top 100. The clients were very specific about their workloads, and gave a dozen or so of them, run by their users, as the acceptance suite. Their top500 submission was fun because it gave an "Nth fastest supercomputer" tag, but it was at the bottom of the priority list (and I don't think it was required for acceptance).

Do other HPC sites really just start out wanting to reach #1 (or some top500 goal), with no real idea of how the machine will be used? I highly doubt it.

>
> I have posted about this general
> issue and about the poverty of non-local bandwidth on a particular forum on
> Usenet at length. I have already argued at length about why gigantic computer
> centers with lousy interconnect are in the interest neither of science nor of
> the national purse. They do serve the interest of some of the players I have
> already mentioned. Someone has responded, and probably the person who
> identified himself here as forestlaughing, that these gigantic machines are
> actually throughput machines that are rarely employed in their actual
> giganticness, except to deal with many users under a single bureaucracy. To
> that argument, I have no answer except that my experience with those gigantic
> bureaucracies has never been positive.
>
> As an introduction to my lengthy
> involvement in this controversy, you may want to endure the following thread on
> comp.arch:
>
> https://groups.google.com/group/comp.arch/browse_thread/thread/225ae7ff9050a027/71fbb4d1cdd9651c?hl=en&q=Gordon+Bell+group:comp.arch+author:Robert+author:Myers#71fbb4d1cdd9651c

GPGPU? Assembly programming? Pure streaming? Sounds fishy.

Custom interconnects can be (and are) used.

Custom chips can be and have been used, software could use vectors, chip bandwidth could be increased. But the fact is that more bang for the buck can be had almost everywhere by using commodity CPUs, or in the case of BG, custom HPC chips which look more like commodity chips (i.e., no huge memory bandwidth, no massive vector units, and with caches). Because that actually works better. Caches do work for many compute codes.
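To make the cache point concrete, here is a minimal sketch of my own (plain C, illustrative only, not code from any of the machines discussed): a blocked matrix multiply reuses each tile many times from cache, so a cache-based commodity core can stay busy without the gigantic memory bandwidth a cache-less vector design would need for the same arithmetic. The tile size B is an assumption chosen so three BxB double tiles fit in cache.

#include <stddef.h>

#define B 64  /* assumed tile size: three 64x64 double tiles fit in a typical L2 */

/* Naive triple loop: every update streams a and b from main memory,
 * so on large n it is limited by memory bandwidth. */
void matmul_naive(size_t n, const double *a, const double *b, double *c)
{
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < n; j++)
            for (size_t k = 0; k < n; k++)
                c[i*n + j] += a[i*n + k] * b[k*n + j];
}

/* Blocked version: each BxB tile is reused roughly B times from cache,
 * cutting main-memory traffic by about a factor of B. */
void matmul_blocked(size_t n, const double *a, const double *b, double *c)
{
    for (size_t ii = 0; ii < n; ii += B)
        for (size_t kk = 0; kk < n; kk += B)
            for (size_t jj = 0; jj < n; jj += B)
                for (size_t i = ii; i < ii + B && i < n; i++)
                    for (size_t k = kk; k < kk + B && k < n; k++) {
                        double aik = a[i*n + k];
                        for (size_t j = jj; j < jj + B && j < n; j++)
                            c[i*n + j] += aik * b[k*n + j];
                    }
}

That kind of blocking is exactly what tuned compute codes do, which is why caches work for them.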

Custom vector machines with gigantic memory bandwidth were already well on the way out before the top500 list started. Although again, there are custom machines out there which some people use, because I guess their workloads really don't fit traditional CPUs.

http://en.wikipedia.org/wiki/SX-9

And actually you also see installations going the other way. MD-GRAPE, for example, was a custom chip with massive compute, but it did not have large memory or interconnect bandwidth, and it sent the same data to multiple pipes (so it is not like a traditional vector machine either). So not everyone wants vectors and bandwidth.
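A back-of-the-envelope way to see why that trade-off can make sense (my own sketch with made-up round numbers, not actual MD-GRAPE or SX-9 figures): a pairwise force kernel does O(N^2) arithmetic on O(N) data, so its flops-per-byte ratio grows with N and bandwidth stops being the limit, unlike a streaming kernel that does a fixed couple of flops per byte moved.

#include <stdio.h>

/* Rough arithmetic-intensity comparison using hypothetical numbers. */
int main(void)
{
    double n = 1e5;                 /* particles held on one node (assumed) */
    double bytes_per_particle = 32; /* position + charge in double precision (assumed) */
    double flops_per_pair = 30;     /* distance, inverse sqrt, force accumulate (assumed) */

    /* Pairwise forces: N^2 interactions over N particles' worth of data. */
    double nbody_flops = n * n * flops_per_pair;
    double nbody_bytes = n * bytes_per_particle;
    printf("N-body intensity: %.0f flops/byte\n", nbody_flops / nbody_bytes);

    /* STREAM-like triad a[i] = b[i] + s*c[i]: 2 flops per 24 bytes moved. */
    printf("Triad intensity:  %.3f flops/byte\n", 2.0 / 24.0);

    return 0;
}

With numbers anywhere in that ballpark, the pairwise kernel is thousands of times more compute-dense than a streaming kernel, which is why a chip with huge compute and modest bandwidth can be the right tool for that job.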

So if the top supercomputers are just about getting the #1 spot, wouldn't we see a vibrant community of vector machines and weird and wonderful streaming processors programmed in assembly as we go down the list? Or does nobody care about real computing, and everyone just wants a spot somewhere (anywhere) on the list? "Who cares about biochemical simulations, let's spend all our money to get #158 on top500." No, that does not happen. And there are a significant number of private organizations down the list too, you know.

Very few of them even use GPUs, let alone fully custom CPUs.


>
> I went so far as to enlist the aid of some
> comp.arch participants, who have been generous with their time and their
> patience with the fact that I am not an actual computer architect. When IBM
> bailed on Blue Waters and I seemed to be the only one saying that the Emperor
> was plainly walking naked, I gave up.

Blue Waters was not revolutionary. It was a "commodity" non-vector POWER7 CPU with a custom interconnect. What was good about Blue Waters that is not matched by BG/Q or the K computer?