By: Gabriele Svelto (gabriele.svelto.delete@this.gmail.com), April 29, 2013 4:56 am
Room: Moderated Discussions
David Kanter (dkanter.delete@this.realworldtech.com) on April 29, 2013 12:08 am wrote:
> In the last week or so, several sources have suggested that my initial analysis was partially incorrect.
> Rather than using a wide and slow interface for Haswell's eDRAM (e.g., 512-bit @ 1GT/s), it will be
> a narrow and fast interface. Sources also suggested that the bandwidth estimate was a bit low.
Sounds like shades of an FB-DIMM-like interface.
> I don't have any specific details, but the bandwidth estimate is probably accurate to 20-30%.
> it's probably safe to assume significantly faster than DDR3 specs (so figure >2.5GT/s).
It will be interesting to see how much improvement the graphics will get from this. In the past, overclocking memory didn't yield significant performance improvements, which pointed to an already saturated GPU core; this time, however, Intel should have increased the execution resources quite a bit, so the extra bandwidth may be more beneficial than in the past.
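(Just as a rough back-of-envelope: the originally floated 512-bit @ 1 GT/s figure works out to roughly 64 GB/s of peak bandwidth, and a hypothetical narrow/fast link such as 128 bits at 4 GT/s, numbers I'm only assuming for illustration, would land in the same ballpark:

$$
\frac{512\ \text{bit}}{8} \times 1\ \text{GT/s} = 64\ \text{GB/s}, \qquad
\frac{128\ \text{bit}}{8} \times 4\ \text{GT/s} = 64\ \text{GB/s}
$$

so the interesting question is how far above that the final bandwidth ends up.)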
That being said, I still wonder how the eDRAM will actually work. Your article describes it as a last-level cache, but I couldn't find any specifics on it. Will the processor have tags on-die to use it effectively as an L4 cache, or will it be accessible only as a scratchpad memory for communicating with the GPU?