Article: Parallelism at HotPar 2010
By: no thanks (no.delete@this.thanks.com), August 4, 2010 7:54 pm
Room: Moderated Discussions
Based on this discussion, it seems plausible that the Nvidia x86 processor rumors that surface from time to time have some basis in fact. The technical and business arguments will inexorably push Nvidia toward having a CPU core to integrate on a single chip with their GPU technology.
Assuming the hypothetical x86, the question is whether they could execute on it. One would assume they'd have the good sense to start at the low end with an "Atom killer," since for their first CPU chip they would be advised to keep their ambitions under control.
The major issues would seem to be:
1. they've never built an x86
2. they'd need to build a low-power part, which has not proven to be their forte recently
3. ??? Please fill in your own fear here.
I assume they could buy some x86 expertise by absorbing former Intel/AMD engineers, as well as people from smaller players such as Via and Transmeta.
Can they build the x86, and how do they work around a license? Given an aggressive schedule, I assume a simple x86 multicore. That would suggest in-order execution without any of the fancy tricks that are the stock in trade of Intel/AMD for increasing single-thread performance. I can hardly imagine Nvidia having that kind of cultural knowledge, given the throughput-machine nature of the GPU. This would seem to suggest the unfortunate path of a Transmeta-style machine. The fear is that competing with Intel or AMD with such a machine comes down to a bet that, for a given transistor budget, the Transmeta approach plus an Nvidia GPU beats real Intel/AMD x86 cores plus the currently weak Intel GPU or the more powerful AMD one. The overheads of the software approach, and the fact that one company already blew immense amounts of cash failing to compete with Intel this way, suggest this is a bad idea.
An alternative would be to concentrate on the ARM architecture at the low end and work to attack the x86 from below, while keeping discrete GPUs at the high end. Despite initial Tegra difficulties, perhaps this is the better route?
Can anyone comment on the relative difficulty of tackling an x86 versus a high-performance ARM design, assuming Nvidia needs to build an integrated CPU and GPU?
Richard Cownie (tich@pobox.com) on 8/3/10 wrote:
>
>I'm pretty sure NVidia's approach is going nowhere, because
>they're a relatively small company, they've lost marketshare
>and developer mindshare with the DX11 generation, and
>they don't seem to be executing well on the basics of
>shipping chips with high yield and competitive performance-
>per-dollar and performance-per-watt. Couple that with
>the lack of a high-performance CPU to integrate, and they
>just have too many strikes against them. And they're
>bleeding money.
>
>I would agree with you that both Intel's and AMD's approaches
>look like plausible contenders. There's probably room for
>both to survive, though if past history is any guide,
>it will be 80:20 in favor of Intel ... But the market
>is so huge that AMD can live with 20%, and probably make
>large profits if they get it up to 25% or 30%.
>
>Anyhow, it's going to be interesting. And the devil will
>be in the details, which we won't know much about until
>both SandyBridge and Llano come out and people start
>optimizing for them.
>
>
>