The Bank Robber: DRDRAM power consumption
Notice that even though the SDRAM and DRDRAM components in my two examples are both described as having 20 ns page read access latencies, the CPU sees much longer latencies, and in the case of the Rambus system the latency is 12.5 to 20 ns longer than for PC100 SDRAM. One argument that can be made in favour of DRDRAM is that its much larger number of banks allows more open pages and thus a better chance of performing a page read in place of a bank read. A 128 Mbyte DIMM containing eight SDRAMs has only 4 banks since the devices operate in parallel, but a 128 Mbyte RIMM with eight DRDRAMs can have 256 banks. Since only a single page per bank can be open at a time, having lots of banks should allow many pages to be open and reduce the chance of page conflicts between two or more threads of memory access locality (e.g. multiple software processes, AGP or PCI DMA transfers, etc.).
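The bank arithmetic above can be sketched as follows; the device counts and per-device bank figures are the ones quoted in the text:

```python
# Bank counts for the two 128 Mbyte module examples in the text.
SDRAM_BANKS_PER_DEVICE = 4
DRDRAM_BANKS_PER_DEVICE = 32
DEVICES_PER_MODULE = 8

# On a DIMM the eight SDRAMs operate in parallel (lockstep), so the
# module exposes only as many banks as a single device.
dimm_banks = SDRAM_BANKS_PER_DEVICE

# On a RIMM each DRDRAM is addressed independently, so bank counts add.
rimm_banks = DRDRAM_BANKS_PER_DEVICE * DEVICES_PER_MODULE

print(dimm_banks, rimm_banks)  # 4 256
```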
Fine in theory but not in practice. First of all, an SDRAM bank read has 90 ns of latency, which is the same as a page read operation in the heavily loaded DRDRAM system. Secondly, a DRDRAM has to be in the ACTIVE state to have open pages, and in this state a single memory device can dissipate nearly 4 Watts of power; obviously, leaving open pages strewn throughout a RIMM is a quick path to a memory meltdown. Finally, the 16 or 32 banks in a DRDRAM device are not independent of each other because adjacent banks share the sense amp strips between them: if bank N is activated then banks N-1 and N+1 cannot be. This means that, as an absolute limit, only half the banks can be open simultaneously. In practice, bank conflicts occur on average well before even half the banks are open.
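The adjacent-bank restriction can be illustrated with a minimal sketch (my own toy model, not Rambus's actual controller logic): greedily activating every bank the shared sense amps allow leaves at most half of them open.

```python
# Toy model of the shared sense-amp constraint: activating bank N
# blocks banks N-1 and N+1 from being activated.

def can_activate(open_banks, n):
    """True if bank n can be activated given the already-open banks."""
    return n not in open_banks and not ({n - 1, n + 1} & open_banks)

# Greedily open every bank we legally can in a 32-bank device: only
# every other bank fits, so at most half are ever open at once.
open_banks = set()
for n in range(32):
    if can_activate(open_banks, n):
        open_banks.add(n)

print(len(open_banks))  # 16
```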
Too Hot? Jump in the B-pool and Have a Nap!
One of the most important functions of a DRDRAM memory controller is power management. A Direct Rambus memory device has four basic operating modes when it comes to power dissipation. The ACTIVE state is the most power hungry, but it allows pages to be left open and accesses to occur with minimum latency. The next lower power state is called STANDBY; in this state the device has its column packet receivers and data transceivers powered down but is ready for bank accesses. Beyond STANDBY is the NAP state, which powers down everything but refresh operations and places the receive and transmit clock delay-locked-loop (DLL) circuits into a special nap state. Finally, the lowest power state is called PDN, or powerdown. This is similar to NAP but completely shuts down the DLL circuits, so it has an even longer wake-up latency. NAP exit takes about 40 to 50 ns, while PDN exit takes from 4 to 8 us.
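The four power states and the exit latencies quoted above can be summarized in a small sketch; the state names follow the text, but the table itself is my own (STANDBY exit latency is not quoted in the text, so it is left unspecified):

```python
from enum import Enum

class PowerState(Enum):
    ACTIVE = "active"    # pages may be open; minimum access latency
    STANDBY = "standby"  # column receivers/transceivers off; ready for bank accesses
    NAP = "nap"          # only refresh runs; DLLs in a special nap state
    PDN = "pdn"          # DLLs fully shut down; longest wake-up

# Approximate worst-case exit latencies in nanoseconds, per the text.
EXIT_LATENCY_NS = {
    PowerState.ACTIVE: 0,
    PowerState.STANDBY: None,  # not quoted in the text
    PowerState.NAP: 50,        # ~40 to 50 ns
    PowerState.PDN: 8000,      # 4 to 8 us
}
```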
The Intel 820 chipset implements power management by assigning DRDRAMs to either “pool A” or “pool B”. A-pool devices are in either ACTIVE or STANDBY mode, while B-pool devices are normally in NAP mode. Hardware configuration registers set the size of the A-pool to 1, 2, 4, or 8 devices and also cap the number of ACTIVE devices in the A-pool at 1, 2, or 4 devices. The 820 supports a maximum of 8 open pages across all DRDRAMs in the system. The 820 can optionally be set to monitor inactivity on the Rambus channel; if an inactivity threshold is exceeded, the least recently used (LRU) device in the A-pool is demoted to the B-pool. If an access is made to a B-pool device, it must become ACTIVE, and an existing A-pool device may have to be demoted to the B-pool to satisfy the active device limit.
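A hypothetical sketch of the pool scheme: a fixed-size A-pool managed in LRU order, with the evicted device demoted to the napping B-pool. The class and method names are my own; the real chipset does this in hardware and also enforces the separate ACTIVE-device limit, which this toy model omits.

```python
from collections import OrderedDict

class PoolManager:
    """Toy model of the 820's A-pool/B-pool device management."""

    def __init__(self, a_pool_size=4):
        self.a_pool_size = a_pool_size
        self.a_pool = OrderedDict()  # device id -> True, in LRU order

    def access(self, device):
        """Touch a device, promoting it to the A-pool. Returns the
        device demoted to the B-pool (NAP), or None."""
        if device in self.a_pool:
            self.a_pool.move_to_end(device)  # refresh LRU position
            return None
        demoted = None
        if len(self.a_pool) >= self.a_pool_size:
            demoted, _ = self.a_pool.popitem(last=False)  # evict LRU
        self.a_pool[device] = True
        return demoted

mgr = PoolManager(a_pool_size=2)
mgr.access("d0"); mgr.access("d1")
print(mgr.access("d2"))  # d0 (demoted to the B-pool)
```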
As an additional protection against RIMM overheating, the 820 can be set to throttle memory activity based either on continuous checking of time-averaged memory traffic levels or on the thermal trip sensor in each DRDRAM. This sensor monitoring is performed on a polling basis when the 820 performs an I/O current calibration on each memory device every 100 ms. When throttling is invoked, the chipset halts memory accesses once the number performed within the current throttling interval exceeds a programmed threshold.
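The interval-based throttling can be sketched roughly as follows (a toy model; on the real chipset the threshold and interval live in programmed registers, and the values below are purely illustrative):

```python
class Throttle:
    """Toy model of access throttling within a fixed interval."""

    def __init__(self, threshold, interval_ns):
        self.threshold = threshold
        self.interval_ns = interval_ns
        self.count = 0
        self.interval_start = 0

    def allow(self, now_ns):
        """Return True if a memory access may proceed at time now_ns."""
        if now_ns - self.interval_start >= self.interval_ns:
            self.interval_start = now_ns  # new interval; reset the count
            self.count = 0
        if self.count >= self.threshold:
            return False                  # throttled: halt accesses
        self.count += 1
        return True

t = Throttle(threshold=2, interval_ns=1000)
print(t.allow(0), t.allow(10), t.allow(20), t.allow(1500))
# True True False True
```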