Article: AMD's Mobile Strategy
By: Ricardo B (ricardo.b.delete@this.xxxxx.xx), January 6, 2012 6:12 pm
Room: Moderated Discussions
Paul A. Clayton (paaronclayton@gmail.com) on 1/6/12 wrote:
---------------------------
>Michael S (already5chosen@yahoo.com) on 1/6/12 wrote:
>---------------------------
>>Ricardo B (ricardo.b@xxxxx.xxxx) on 1/6/12 wrote:
>>---------------------------
>>>Single socket latency DRAM is in the 40-50 ns range
>>>nowadays.
>>>
>>>10 ns or less seems quite feasible with SRAM through a
>>>parallel interface.
>>>
>>
>>10ns is not feasible. 25ns - may be, but even that is hard.
>
>How much of that is the off-chip penalty (even with fairly
>tight integration) and how much the access delay of the
>memory itself? (Also would there be any benefit in
Probably neither.
SRAM array access times are small and do scale down with process.
Grouping arrays and adding a parallel interface doesn't hurt much either.
There are 72 Mbit SSRAM chips on the market with <5 ns latency -- which is what prompted me to throw out the stupid number of 10 ns.
Yet even largish on-die L3 caches have 15-20 ns latencies.
My guess is that the culprit is not the memory access itself but the lookup in the tags.
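As a rough sanity check of the numbers above (the cycle counts and clock here are hypothetical, not from any datasheet), converting between cycles and nanoseconds shows why a mid-teens-ns L3 is plausible even when the raw array is fast:

```python
# Back-of-envelope cycles <-> nanoseconds conversion for cache latencies.
# All concrete numbers below are illustrative assumptions.

def latency_ns(cycles: float, freq_ghz: float) -> float:
    """Latency in nanoseconds for a given cycle count at a given core clock."""
    return cycles / freq_ghz

def latency_cycles(ns: float, freq_ghz: float) -> float:
    """Latency in core cycles for a given time at a given core clock."""
    return ns * freq_ghz

# A hypothetical ~45-cycle L3 load-to-use at 3 GHz lands squarely in the
# 15-20 ns range quoted above, tag lookup and queuing included:
l3_ns = latency_ns(45, 3.0)          # 15.0 ns

# Conversely, a <5 ns external SSRAM array is only ~15 core cycles away
# at the same clock -- the raw array access is not the bottleneck:
ssram_cycles = latency_cycles(5.0, 3.0)  # 15.0 cycles

print(f"45-cycle L3 at 3 GHz: {l3_ns:.1f} ns")
print(f"5 ns SSRAM at 3 GHz:  {ssram_cycles:.0f} cycles")
```

The gap between the ~5 ns array and the ~15 ns cache is then plausibly spent on tag lookup, way selection, and the interconnect, consistent with the guess above.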