By: David Kanter (dkanter.delete@this.realworldtech.com), February 16, 2013 3:16 pm
Room: Moderated Discussions
> > Doesn't DDR4 mandate only one DIMM per channel? I thought it expanded the number of ranks (stacking) but
> > the tradeoff was the single DIMM, unless you use some type of on-board switch. I don't know nearly enough
> > about DRAM to know what sort of tradeoff there is between the complexity of multiple DIMMs per channel
> > and the complexity of more ranks per DIMM, not to mention a switch. I mention DDR4 because it probably
> > makes more sense to use a DDR4 controller in 64 bit ARM SoC designs unless they're targeted at getting
> > on the market in the next 24 months (in which case they probably need to tape out this summer)
>
> There are many ways to handle this, and no essential barriers to them being widely adopted.
> The basic idea is already apparent in Intel's SMB/SMI architecture.
> You have multiple memory sockets, each of which has an associated on-die "sub-memory controller". This
> part handles the electrical aspects of communicating with the DIMM, but none of the logic. It then communicates
> with the real memory controller via a proprietary bus which can be modified and updated as necessary,
> and is not constrained by the standardization and commodity aspects of the DIMM bus.
>
> (In a way this is like registered DRAM, but done right, moving
> the interface to the board rather than having it on the DIMM.)
This is certainly one way to do it. The challenge is that those buffers consume power, and low power is supposed to be the big advantage of an ARM SoC. I'd wager that using fully buffered memory would eat up any potential power gains, and possibly reverse them.
IIRC, each buffer draws something like 2W, which is a very large fraction of the power of a Cortex-A15 core.
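As a rough back-of-the-envelope illustration (every figure below is a ballpark assumption for the sake of argument, not a measurement), in Python:

# Sketch: compare buffer power against core power for a hypothetical buffered ARM SoC.
# All numbers are rough assumptions, chosen only to show the scale of the overhead.
buffer_w = 2.0        # approximate power of one memory buffer (per the figure above)
a15_core_w = 1.5      # rough per-core power for a Cortex-A15 under load
channels = 4          # hypothetical number of buffered memory channels
cores = 8             # hypothetical core count for a server-class SoC

buffer_total_w = buffer_w * channels   # power spent on buffers alone
core_total_w = a15_core_w * cores      # power spent on the CPU cores
overhead_pct = 100 * buffer_total_w / core_total_w
print(f"Buffers add {buffer_total_w:.1f} W, about {overhead_pct:.0f}% of core power")

Even with generous assumptions, the buffers end up eating a large slice of the total power budget, which is the point: whatever efficiency edge the cores have can easily disappear into the memory subsystem.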
David