By: Doug S (foo.delete@this.bar.bar), March 27, 2021 5:18 pm
Room: Moderated Discussions
Brett (ggtgp.delete@this.yahoo.com) on March 27, 2021 4:27 pm wrote:
> Ganon (anon.delete@this.gmail.com) on March 25, 2021 5:56 pm wrote:
> > My prediction for the immediate future is similar to what is happening today already.
> > Not everyone buys the SKUs with the highest core count: it would be silly to pay for
> > many more cores than you need to saturate the memory bandwidth in aggregate for your workload.
> > DDR5 will let you use effectively 2x the core count because it will eventually deliver
> > ~2x the bandwidth (3200 -> 6400, plus bandwidth-efficiency improvements). Not sure what will happen
> > after DDR5 though.
> >
> > I don't think an L4 is cost-effective, though IBM does have one in their mainframes. You would
> > probably be better off buying 2+ machines with modest core counts and
> > DDR5, rather than one complex machine with L4 cache, HBM, etc.
>
> RAM on die covers multiple markets, with and without external DRAM.
> Higher peak performance: being 5% faster means you can charge twice as much.
> Higher usable cores per socket, for many use cases.
> More sockets (8) without DRAM for higher rack density, at less power per compute.
> You could also do a shared DDR bank with, say, four sockets, using an existing chiplet
> north bridge. (A special chip carrier with four dies and a north bridge.)
>
> The same die is used whether or not you hook DDR to it.
> You just give the on-die RAM a different name, L4, if DDR is connected.
> An L4 is slightly more complex than flat on-chip RAM, but the size difference is trivial,
> so costs dictate one design. A large page size would be preferable for on-chip RAM.
>
> The key to shooting me down is whether or not a reasonable amount of RAM
> can fit on a reticle-limited die; I would say 16 gigs is enough.
You can't make standard DRAM on a logic process; you have to use eDRAM, which is much less dense than trench DRAM.
You'd probably be hard-pressed to fit 16 gigaBITS of eDRAM onto a reticle-limited die on N5. No chance in hell of 16 gigabytes. Rough numbers below.
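A quick back-of-envelope in Python, with the assumptions labeled (the cell size is IBM's published 14nm eDRAM figure, nobody offers eDRAM on N5 at all, and the 50% array efficiency is my guess):

    # Back-of-envelope: does 16 Gbit of eDRAM fit on a reticle-limited die?
    # Assumptions, not vendor specs:
    #   - reticle limit ~858 mm^2 (26 mm x 33 mm full field)
    #   - eDRAM cell ~0.0174 um^2 (IBM's published 14nm figure; N5 has no
    #     eDRAM offering, so this is if anything generous)
    #   - ~50% array efficiency (sense amps, decoders, redundancy)
    RETICLE_MM2 = 26 * 33             # ~858 mm^2
    CELL_UM2 = 0.0174                 # area per bit
    ARRAY_EFF = 0.5                   # fraction of the macro that is cells

    bits = 16 * 2**30                 # 16 Gbit
    cells_mm2 = bits * CELL_UM2 / 1e6      # raw cell area
    macro_mm2 = cells_mm2 / ARRAY_EFF      # with periphery overhead

    print(f"raw cells:     {cells_mm2:.0f} mm^2")
    print(f"with overhead: {macro_mm2:.0f} mm^2 of a {RETICLE_MM2} mm^2 reticle")
    # ~299 mm^2 of cells, ~598 mm^2 with overhead: most of the reticle is
    # gone before you place a single core, and 16 gigaBYTES (8x that) is hopeless.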
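On Ganon's bandwidth point upthread, the 2x arithmetic does check out; a minimal sketch (the channel count and per-core demand are illustrative assumptions, not any real SKU):

    # Peak DRAM bandwidth per channel: transfer rate (MT/s) x 8 bytes.
    def peak_gbs(mt_per_s, channels):
        return mt_per_s * 8 * channels / 1000    # GB/s

    ddr4 = peak_gbs(3200, 8)    # 8-channel DDR4-3200 -> 204.8 GB/s
    ddr5 = peak_gbs(6400, 8)    # same channel count at DDR5-6400 -> 409.6 GB/s

    PER_CORE = 4                # assumed GB/s each core sustains (workload-dependent)
    print(f"DDR4-3200: {ddr4:.0f} GB/s, feeds ~{ddr4 / PER_CORE:.0f} cores")
    print(f"DDR5-6400: {ddr5:.0f} GB/s, feeds ~{ddr5 / PER_CORE:.0f} cores")
    # Double the transfer rate at the same channel count -> roughly double
    # the cores you can usefully feed, which is exactly the 2x claim.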