SDRAM Bank Interleaving – What is It?


A Step Forward in Time.

Our previous example chip was OK in the early days. But it only had one bit of output. Eight chips to the module gave eight bits of output, which meant that you needed to add memory modules four at a time for the then-current 386 and 486 designs with their 32-bit data bus. But the Pentium was coming – and it had a 64-bit data bus. Using the above scheme, you would have had to add memory eight modules at a time. Something had to be done.

The solution, as they say, was simple. Remember that our first example chip was a 4M x 1 device – a 4-megabit chip, that is 4M x 1 = 4Mb. The last number (the 1) is the number of arrays on the chip, and each array contributes one bit of output. To have a chip output eight bits at a time, simply add more arrays.

Now, let’s say you have a 16Mb DRAM in a 2M x 8 configuration. This means you have eight arrays, each of which is two megabits in size – 2Mb x 8 = 16Mb. When the CPU requests a block of memory, the same row and column address goes to every array, and each array contributes one bit. With memory arranged thus (eight bits per chip), eight chips give the 64 bits a module needs.
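To make the configuration arithmetic concrete, here is a quick sketch in Python (the names are illustrative, not from any real datasheet):

```python
# The "2M x 8" arithmetic, spelled out (illustrative names).
ARRAY_MBITS = 2        # each array holds 2Mb
ARRAYS_PER_CHIP = 8    # the "x8" in 2M x 8 -- one output bit per array
CHIP_MBITS = ARRAY_MBITS * ARRAYS_PER_CHIP
print(CHIP_MBITS)      # 16 -> a 16Mb chip

CHIPS_PER_MODULE = 8   # eight chips on the module
print(ARRAYS_PER_CHIP * CHIPS_PER_MODULE)  # 64 -> 64 bits per access
```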

But I Want More!

Don’t we all. As memory chips become larger (e.g., 64Mb), the number of rows/columns starts growing again. The solution to this? Break the memory up into multiple array “groups” – or internal “banks”. A 2-bank 64Mb (8M x 8) chip would have two sets of arrays, with each array being 4Mb (1024 x 4096), or 32Mb per “bank” (4M x 8). A 4-bank 64Mb chip would have four sets of arrays, with each array being 2Mb in size (or 16Mb per bank).
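The bank arithmetic above can be checked with a few lines of Python (a sketch with made-up variable names, assuming an x8 chip as in the text):

```python
# Bank arithmetic for a hypothetical 64Mb, x8 chip.
CHIP_MBITS = 64
ARRAYS_PER_BANK = 8  # an x8 chip reads one bit from each of 8 arrays

for banks in (2, 4):
    mbits_per_bank = CHIP_MBITS // banks
    mbits_per_array = mbits_per_bank // ARRAYS_PER_BANK
    print(f"{banks} banks: {mbits_per_bank}Mb/bank, {mbits_per_array}Mb/array")
# 2 banks -> 32Mb per bank, 4Mb arrays; 4 banks -> 16Mb per bank, 2Mb arrays
```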

This has several advantages.

  1. It allows you to keep each array smaller, limiting the number of row/column address pins (less cost, remember?).
  2. It allows you to keep only a certain number of cells ‘open’ or active at a time, saving power.
  3. It allows you to hide the time needed to precharge one bank by accessing another bank during the precharge.

The disadvantage, of course, is that hitting a closed bank is a performance problem…

The Disadvantages

SDRAM can only have a certain number of banks “open” at one time; the other banks are closed. In our first example (the 4M x 1 chip), the entire chip was one “bank”, and always “open”. But with our multi-bank chips, only one bank can be open at a time.

Say we have a four bank (internal) chip. To reduce power and heat, we only have one of the four banks “open” or active at a time. The others are inactive or closed. Getting memory out of an active bank is quick and easy.

When a bank is open but the memory cell you actually want is in another (closed) bank, the open bank must be closed and the bank holding your data must be opened. Only then can you access the data you want, by sending the row and column address to the newly opened bank. This is where interleaving shows its performance advantage.

If the data you want is in the open bank, you don’t have to go through the “close the open bank”, “open the closed bank that you want” routine. If it isn’t, you can hide the time it takes to open a closed bank by sending the commands intelligently.
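The hit-versus-miss cost can be sketched as a tiny function. The cycle counts here are invented for illustration, not real SDRAM timings:

```python
def access_cycles(open_bank, want_bank, t_read=2, t_close=3, t_open=3):
    """Cycles for one read, with made-up timings (not real SDRAM numbers)."""
    if want_bank == open_bank:
        return t_read                     # bank hit: just read
    return t_close + t_open + t_read      # bank miss: close, open, then read

print(access_cycles(0, 0))  # 2  (hit)
print(access_cycles(0, 1))  # 8  (miss: 3 + 3 + 2)
```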

Here’s what a “dumb” controller would do:

  • Send row/column address
  • Read data (the controller knows that more is being requested, so…)
  • Close bank
  • Issue an open-bank command to the next bank where data is stored
  • Send row/column address
  • Read more data
  • Close bank
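Putting hypothetical cycle counts on those steps shows why the strictly serial approach hurts (the numbers are invented, purely for illustration):

```python
# Invented cycle counts for each command -- not real SDRAM timings.
T_ADDR, T_READ, T_CLOSE, T_OPEN = 3, 2, 3, 3

# The "dumb" sequence: every step waits for the previous one to finish.
dumb = T_ADDR + T_READ + T_CLOSE + T_OPEN + T_ADDR + T_READ + T_CLOSE
print(dumb)  # 19 cycles for the whole sequence, all serial
```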

The time is wasted waiting for the close and open commands to complete. Remember that your CPU is probably in the half-to-two-gigahertz range, while memory is running at 100 or 133MHz. While the CPU is waiting for data, it’s doing nothing.

If your memory and your chipset’s memory controller can take advantage of memory interleave, what happens is this:

  • Send row/column address (the controller knows that more is being requested, so…)
  • Issue an open-bank command on the next bank
  • Read data
  • Issue a close-bank command (to the currently open bank)
  • Send row/column address to the bank that is now open (opened in step 2)
  • Read more data
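Putting invented cycle counts on both sequences shows the payoff: the open and close of the other bank complete while the read and the second address are in flight, so they drop off the critical path (again, the numbers are made up for illustration):

```python
# Invented cycle counts -- not real SDRAM timings.
T_ADDR, T_READ, T_CLOSE, T_OPEN = 3, 2, 3, 3

# Dumb controller: address, read, close, open, address, read -- all serial.
dumb = T_ADDR + T_READ + T_CLOSE + T_OPEN + T_ADDR + T_READ

# Interleaved: open-bank and close-bank overlap the read and the second
# address, so only the addresses and reads stay on the critical path.
assert T_OPEN <= T_READ + T_ADDR   # the overlap fully hides the open
interleaved = T_ADDR + T_READ + T_ADDR + T_READ

print(dumb, interleaved, dumb - interleaved)  # 16 10 6
```

With these made-up numbers the controller shaves six cycles per bank crossing; the real saving depends on the actual chip timings.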

See what’s happened? A lot of the wait time has been hidden, because some steps aren’t dependent on each other and can run in the background while other commands execute. With this scheme, you shave off a few memory cycles that the CPU would otherwise spend idle while banks are closed and opened – and this saving happens every time you need to go to main memory. Remember the earlier explanation of why the RAS setting did not affect performance as much as CAS, because we could hide the time taken to generate the Row Address Strobe in most cases – this is a similar “trick”.

