Article: Parallelism at HotPar 2010
By: Richard Cownie (tich.delete@this.pobox.com), August 5, 2010 8:12 am
Room: Moderated Discussions
Rohit (@.) on 8/5/10 wrote:
---------------------------
>Taping out multiple chips with different core counts/caches from the same architecture
>is a relatively novel phenomenon. I am pretty sure that in the RISC days, the variety was much lower.
>
>OTOH, in a GPU lineup there are typically 4-5 chips in a generation. The lowest
>end ones typically receive the least R&D effort and are released at the end.
This is another way that CPUs have become more like GPUs:
nowadays you develop one core architecture, but build at
least four variants with different core counts: a single-core
part for cheap or power-constrained laptops, a dual-core for
mainstream laptops and desktops, a quad-core for power
users, and a hex-core. And then at the higher end there are
further differences between high-clocked desktop/workstation
chips and throughput-oriented server chips.
With Atom, and soon Bobcat, there will also be
specialized low-power core designs at the bottom end.
And the high end keeps getting higher, so we're
probably going to end up with chips with 1, 2, 3, 4, 6, and 8
cores.
>So from purely the R&D perspective, the NRE is relatively low on the <$100 card market.
But that's a fiction. You had to spend a ton of money
to develop the GPU architecture and the drivers. You can
massage the accounting in various subjective ways,
but if that money hadn't been spent, you couldn't sell
those low-end cards at all.
>I guess what I am trying to say is that GPUs are not going to vanish from the HPC
>market even if nobody writes HPC codes for them. RISCs had nothing like that going for them.
Oh, I agree. They're not going to vanish. They're very
good at some applications. And even if the speedup is only 3x
or 5x rather than 80x, if it makes the difference between
needing a $50M machine and a $15M machine, that's
quite handy (I've sketched the arithmetic below). But are they
going to dominate HPC, say, the way Cray's vector machines did
in the late 1970s and early 1980s? I think not. And are the GPUs
that go into those HPC compute farms going to be discrete
GPUs on separate (PCIe or whatever) cards? Or are they
going to be whatever GPU-like hardware gets integrated
onto AMD's and Intel's CPU+GPU chips?
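To put rough numbers on that cost argument, here's a back-of-envelope
sketch in Python. Every figure in it (the throughput target, the node
costs, the 4x speedup) is an illustrative assumption of mine, not a
number from this thread:

def machine_cost(target_throughput, node_throughput, node_cost):
    # Total cost of enough identical nodes to hit the throughput target.
    nodes_needed = -(-target_throughput // node_throughput)  # ceiling division
    return nodes_needed * node_cost

TARGET = 10_000          # sustained application throughput, arbitrary units (assumed)
CPU_NODE_PERF = 1        # baseline node performance (assumed)
CPU_NODE_COST = 5_000    # dollars per CPU-only node (assumed)
GPU_SPEEDUP = 4          # the realistic 3x-5x, not the marketing 80x
GPU_NODE_COST = 9_000    # dollars per GPU-accelerated node (assumed)

cpu_only = machine_cost(TARGET, CPU_NODE_PERF, CPU_NODE_COST)
gpu_accel = machine_cost(TARGET, CPU_NODE_PERF * GPU_SPEEDUP, GPU_NODE_COST)

print(f"CPU-only machine: ${cpu_only:,}")
print(f"GPU-accelerated:  ${gpu_accel:,}")

Even at a modest 4x speedup and nearly double the per-node cost, the
GPU-accelerated machine comes out at well under half the price, which is
the "quite handy" part.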
Of course a discrete GPU chip can be bigger and hotter
and can have more DRAM bandwidth. But I'm not sure that
the strategy of building the GPU as a huge standalone chip on a
foundry process will beat the strategy of building the GPU as
half of a huge chip on a more advanced CPU process, with
less bandwidth to memory but much higher bandwidth to the CPU.
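And for what it's worth, a minimal sketch of that bandwidth trade-off,
again with made-up round numbers (a PCIe-2.0-class 8 GB/s link and
GDDR5-class 150 GB/s DRAM for the discrete card, versus an assumed
100 GB/s on-die path and shared DDR3-class 30 GB/s memory for the
integrated GPU):

def job_time(data_gb, passes, link_gb_per_s, mem_gb_per_s):
    # Move the CPU-produced data across the CPU<->GPU link once,
    # then stream over it 'passes' times from GPU-visible memory.
    return data_gb / link_gb_per_s + passes * data_gb / mem_gb_per_s

DATA_GB = 16   # working set produced on the CPU side (assumed)
PASSES = 2     # how many times the GPU streams over it (assumed)

discrete   = job_time(DATA_GB, PASSES, link_gb_per_s=8,   mem_gb_per_s=150)
integrated = job_time(DATA_GB, PASSES, link_gb_per_s=100, mem_gb_per_s=30)

print(f"Discrete card:  {discrete:.2f} s")
print(f"Integrated GPU: {integrated:.2f} s")

With only a couple of passes over data that starts on the CPU, the slow
PCIe hop dominates and the integrated part wins; crank PASSES up and the
discrete card's DRAM bandwidth wins instead. Which side of that line most
HPC codes fall on is exactly the open question.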