Article: AMD's Mobile Strategy
By: David Kanter (dkanter.delete@this.realworldtech.com), January 3, 2012 7:54 pm
Room: Moderated Discussions
Bill Henkel (noemail@yahoo.com) on 1/3/12 wrote:
---------------------------
>David Kanter on 12/30/11 wrote:
>> The yields aren't there yet for TSVs.
>
>If TSVs are really not ready for production, AMD could attach the cache chips to
>the same substrate in the package that the processor die is attached to. Maybe
>they would need to tighten up their bump pitch. They could use 2 cache chips for
>a desktop processor and 8 cache chips for a server processor. There might be a
>net savings in system power because of less activity at the DRAM DIMMs. If AMD
>waits until Intel does this, it will be one more reason AMD's chips get stuck in
>the bargain bin while Intel collects all the profit.
The economics don't make sense. As I mentioned in another post, that means you need:
1. An L4 cache controller
2. Pins to connect to the L4 cache, with sufficient bandwidth to handle snooping traffic in servers
The controller and pins use up substantial extra die area and power. So are you going to have a separate CPU die for low-end desktops (no L4), high-end desktops (small L4) and servers (big L4)?
If so, now your validation is 3X worse because you have 3 models, and each separate die will need its own mask set (probably $1-2M each).
If you use the same die, then you are wasting significant power and area on the high-volume parts (low-end desktops) to improve the relatively low-volume parts (high-end desktops and servers).
How many of these do you think AMD can sell, and how much do you think they can increase their prices by?
Let's just recap the costs:
1. More die area for L4 controller
2. More die area for pins
3. More validation for different models
4. External SRAM chips (how much would a 32MB SRAM cost? See the sketch after this list)
5. More complex packaging, lower total yields
6. Need to design a new snoop filter to deal with larger cache sizes in servers
Those costs are pretty significant.
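To put a ballpark on item 4, here's a back-of-envelope die cost model. Every constant in it — the 6T cell size, array efficiency, wafer cost, and yield — is an illustrative assumption on my part, roughly 32nm-class, not anything a vendor has published:

import math

# Back-of-envelope cost model for a standalone 32 MB SRAM cache die.
# Every constant below is an illustrative assumption, not vendor data.
BITS = 32 * 2**20 * 8        # 32 MB expressed in bits
CELL_UM2 = 0.17              # assumed 6T SRAM cell size (um^2), ~32nm class
ARRAY_EFFICIENCY = 0.5       # cell arrays vs. decoders, sense amps, I/O, repair
WAFER_DIAMETER_MM = 300
WAFER_COST_USD = 4500.0      # assumed leading-edge processed-wafer cost
DIE_YIELD = 0.75             # assumed

die_area_mm2 = BITS * CELL_UM2 / ARRAY_EFFICIENCY / 1e6   # um^2 -> mm^2
radius = WAFER_DIAMETER_MM / 2.0
gross_dies = (math.pi * radius ** 2 / die_area_mm2
              - math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * die_area_mm2))
silicon_cost = WAFER_COST_USD / (gross_dies * DIE_YIELD)

print("die area:  %.0f mm^2 per 32 MB" % die_area_mm2)    # ~90 mm^2
print("good dies: %.0f per wafer" % (gross_dies * DIE_YIELD))
print("silicon:   $%.2f per die" % silicon_cost)          # ~$8-9

Even under those friendly assumptions, each 32MB chip is a ~90mm^2 die, and the ~$8-9 is raw silicon only — test, packaging and the vendor's margin all sit on top.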
I'm also very skeptical that the additional performance will be enough to raise ASPs higher than the extra costs. In other words, I suspect it would make AMD less profitable.
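One way to frame that: the price premium times the attach volume has to beat the per-unit cost adder times the same volume, plus the fixed engineering outlay. A toy model, with every input a hypothetical placeholder:

# Toy break-even model; every input is a hypothetical placeholder.
fixed_costs = 3 * 1.5e6 + 20e6   # ~3 mask sets plus extra design/validation
units = 2e6                      # assumed annual volume of L4-equipped parts
bom_adder = 80.0                 # assumed per-unit cost of cache chips + package
asp_uplift = 50.0                # assumed price premium buyers will pay

profit_delta = units * (asp_uplift - bom_adder) - fixed_costs
print("profit delta: $%.1fM per year" % (profit_delta / 1e6))   # -$84.5M here

With those placeholders the delta is about -$85M a year; the ASP uplift has to clear the cost adder by a healthy margin before the fixed costs even begin to amortize.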
>Ricardo B on 12/31/11 wrote:
>> if the L4 is going to be SRAM (although such huge sizes are not
>> realistic for SRAM) and you need a separate die for the L4 models, it
>> would be better to put the L4 cache itself on the same die as the CPU.
>
>The cache chips could be SRAM with 32 MBytes per chip. I don't think it's possible to put 8 * 32 MBytes = 256 MBytes
>on the processor die any time soon.
How much do you think each of those chips is going to cost? If they are relatively cutting-edge silicon (to avoid excessive power consumption), you're talking about hundreds of dollars right there, which far exceeds the higher prices you could command for desktops.
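For a concrete (and again purely hypothetical) illustration: suppose each 32 MB chip sells for $30-50 once test, packaging, and the SRAM vendor's margin are layered onto the raw silicon — the 3-5x multiplier over the die cost sketched above is my assumption:

# Hypothetical purchase price per cache chip; the multiplier over
# raw silicon cost is an assumption, not a quoted price.
chip_low, chip_high = 30.0, 50.0

print("desktop (2 chips): $%.0f-%.0f" % (2 * chip_low, 2 * chip_high))
print("server  (8 chips): $%.0f-%.0f" % (8 * chip_low, 8 * chip_high))

That's $60-100 added to a desktop part and $240-400 to a server part, before counting the L4 controller area, extra pins, fancier package, and yield loss.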
It might be interesting for servers, but the validation and complexity are VASTLY worse there.
David