By: wumpus (lost.delete@this.in-a.cave.net), January 29, 2017 7:57 am
Room: Moderated Discussions
Michael S (already5chosen.delete@this.yahoo.com) on January 29, 2017 2:43 am wrote:
> Per Hesselgren (perhesselgren.delete@this.yahoo.se) on January 28, 2017 6:49 am wrote:
> > Mark Roulo (nothanks.delete@this.xxx.com) on January 27, 2017 9:02 am wrote:
> > > David Kanter (dkanter.delete@this.realworldtech.com) on January 27, 2017 6:46 am wrote:
>
> IMHO, it's quite clear that including the current form of eDRAM cache, i.e. an off-die memory-side
> cache, in the general-purpose-oriented dual/quad-socket Xeon lines is *not* a good idea.
>
> For the current form of Xeon-D it also does not sound like a good idea, but for
> a different reason - [the current form of] Xeon-D is very cost-sensitive.
>
Is there any advantage of off-die eDRAM over conventional DRAM? Obviously Intel fabs can manufacture off-chip eDRAM (and can't manufacture DRAM unless they bought a DRAM fab), but my understanding is that eDRAM gives up considerable density vs. DRAM. I'd also be fairly surprised if there is all that much of a latency advantage (unless the DRAM manufacturers aren't interested in adjusting their masks for such small runs).
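For anyone who wants to put rough numbers on that latency question, a standard AMAT-style estimate is enough. Everything below is a made-up placeholder (and I'm assuming a serial lookup, i.e. a miss pays the cache probe plus the DRAM access), not figures for any real part:

```python
# Back-of-envelope average memory access time (AMAT) for an off-die
# memory-side cache sitting in front of DRAM. All numbers are invented
# placeholders; the point is only how strongly the answer depends on
# the hit rate and on how much faster the cache actually is than DRAM.

def amat(hit_rate, cache_ns, dram_ns):
    """Serial lookup: a miss pays the cache probe plus the DRAM access."""
    return hit_rate * cache_ns + (1.0 - hit_rate) * (cache_ns + dram_ns)

for hit_rate in (0.3, 0.6, 0.9):
    print(f"hit rate {hit_rate:.0%}: AMAT ~{amat(hit_rate, 40.0, 80.0):.0f} ns "
          f"(plain DRAM ~80 ns)")
```

With a cache only 2x faster than DRAM, the cache is a net loss at low hit rates and only starts to pay off well past 50% hits, which is why the size of the latency advantage matters so much.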
I'd also wonder whether increasing the number of DRAM banks can still reduce latency (there was a company trying to peddle "single transistor SRAM" by doing this trick, but that was long ago). It could make HBM caches much more interesting (there has been some hype about some Zen chips using this), especially if it helps "core spamming" by reducing the bandwidth requirements of main memory. No idea whether such [extreme] caches work well enough with databases to help the server market.
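On the banking point, here's a toy model of why more banks can pull average latency down under load: back-to-back random accesses are less likely to land on a bank that is still busy finishing its previous cycle. None of the timing parameters correspond to any real DRAM; they're just there to make the trend visible:

```python
import random

# Toy model: issue one access every `gap` ns to a random bank; an access
# that hits a bank still busy from its previous cycle has to wait it out.
# All timings are invented for illustration only.

def avg_latency(num_banks, accesses=100_000, t_access=30, t_rc=50, gap=20):
    busy_until = [0] * num_banks            # time at which each bank frees up
    now = 0
    total = 0
    for _ in range(accesses):
        bank = random.randrange(num_banks)
        start = max(now, busy_until[bank])  # stall if the bank is still busy
        total += (start - now) + t_access   # queueing delay + access time
        busy_until[bank] = start + t_rc     # bank occupied for its cycle time
        now += gap
    return total / accesses

for banks in (4, 8, 16, 32):
    print(f"{banks:2d} banks: average latency ~{avg_latency(banks):.1f} ns")
```

More banks obviously don't shorten a single isolated access; they only cut the time spent waiting behind bank conflicts, which is exactly the part that matters for a bandwidth-hungry "core spamming" design.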