SRAM, Capacitors, and I/O
While the industry has been able to scale combinational logic effectively, SRAM and other storage elements have proven much more challenging, and Intel is no exception to this rule. The Intel 4 process incorporates two basic SRAM cells: one designed for density, the other for high-current, lower-voltage operation. The high-density cell (HDC) uses only single-fin transistors and is 0.0240 µm², while the 0.0300 µm² high-current cell (HCC) uses two fins for the passgate and pulldown devices. In addition, the Intel 4 process includes a larger 8T SRAM cell (referred to as a register file cell) with single-fin devices for the bit cell and write port and a dedicated three-fin read port; it is 0.0360 µm². The 8T design has about 6X higher leakage than the baseline HCC cell, but offers dramatically lower read and write energy, saving 5.8X and 11.9X respectively.
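The cell areas imply a ceiling on bit density that is easy to sanity-check. The sketch below computes the raw density each cell size allows; real arrays land well below these figures because of peripheral overhead (sense amplifiers, wordline drivers, redundancy), which this simple calculation ignores.

```python
# Raw bit density implied by each Intel 4 SRAM cell area (from Intel's
# disclosure). This ignores array overhead such as sense amps and decoders,
# so it is an upper bound, not an achievable array density.
CELL_AREA_UM2 = {
    "HDC (6T, single-fin)": 0.0240,
    "HCC (6T, two-fin PG/PD)": 0.0300,
    "Register file (8T)": 0.0360,
}

for name, area_um2 in CELL_AREA_UM2.items():
    bits_per_mm2 = 1e6 / area_um2          # 1 mm^2 = 1e6 um^2
    print(f"{name}: {bits_per_mm2 / 1e6:.1f} Mb/mm^2 raw")
```

Against the 23.8 Mb/mm² quoted later for Intel 4 HCC arrays, the 33.3 Mb/mm² raw figure suggests an array efficiency in the rough vicinity of 70%, which is a plausible value for a large cache macro.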
As Figure 10 illustrates, the 90th percentile Vmin for the HDC is 0.6V, while the HCC operates at 0.55V. Generally speaking, Intel CPU designs tend to use the register file cells for frequently accessed structures, such as the L1 cache data arrays, because the power savings from reducing the read and write energy are so substantial. In contrast, the L2 cache data arrays are accessed less frequently, target greater capacity, and are typically implemented with the HCC SRAM cell. The HDC SRAM cell is more likely to appear in arrays with an even lower activity factor than the L2 cache, where the higher operating voltage can be tolerated.
Figure 11 below illustrates the scaling trend for SRAM over several generations of Intel process technologies. Compared to the 10nm process, the Intel 4 SRAM cells scaled by 0.77-0.81X. This is a far cry from the 2X scaling of the logic library, but fairly consistent with results reported by other logic manufacturers such as TSMC and Samsung.
While the density scaling has definitely diminished, Intel successfully reduced the operating voltage by 35mV and 50mV for the HDC and HCC arrays respectively. This translates into active power reductions of roughly 10% and 15%. Another way to look at the scaling is that the Intel 4 HCC arrays have similar density (23.8Mb/mm²) to the 10nm HDC arrays (23.6Mb/mm²), but operate at an 85mV lower voltage, significantly reducing active power consumption.
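The quoted power savings follow from the standard dynamic-power relation P = αCV²f: at fixed activity, capacitance, and frequency, power scales with the square of supply voltage. The sketch below checks the article's numbers under that model, taking the 10nm baseline voltages as the Intel 4 Vmins plus the quoted 35mV and 50mV improvements; it ignores leakage and any frequency change.

```python
# Sanity-check the quoted active-power savings with P ∝ V² (dynamic power
# P = a*C*V^2*f, holding activity, capacitance, and frequency constant).
# Intel 4 Vmins come from Figure 10; the 10nm baselines are reconstructed
# as Intel 4 Vmin plus the quoted 35 mV / 50 mV improvements.
def power_reduction(v_new: float, v_old: float) -> float:
    """Fractional dynamic-power reduction from lowering supply voltage."""
    return 1 - (v_new / v_old) ** 2

hdc = power_reduction(0.600, 0.635)    # Intel 4 HDC vs 10nm HDC
hcc = power_reduction(0.550, 0.600)    # Intel 4 HCC vs 10nm HCC
cross = power_reduction(0.550, 0.635)  # Intel 4 HCC vs 10nm HDC (85 mV lower)
print(f"HDC: {hdc:.0%}, HCC: {hcc:.0%}, HCC vs 10nm HDC: {cross:.0%}")
```

The model yields roughly 11% and 16%, in line with the quoted 10% and 15%, and about a 25% reduction for the iso-density comparison between Intel 4 HCC and 10nm HDC arrays.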
Intel also disclosed an improved MIM capacitor that doubles the capacitance per unit area to 376 fF/µm². Generally, these capacitors are used in the power delivery network as decoupling capacitors that reduce voltage droop and improve clock frequency. This is yet another example of a feature that is very performance-focused.
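The benefit is easy to see with the first-order droop model ΔV ≈ I·Δt/C for a current step: doubling capacitance per unit area halves the droop for the same die area. Only the 376 fF/µm² density below is from the disclosure; the covered area, load step, and response window are hypothetical round numbers for illustration.

```python
# Toy droop estimate, ΔV ≈ I*Δt/C, to show why doubling MIM capacitance
# per unit area matters. Only the MIM density is from Intel's disclosure;
# the die area, load step, and response window are hypothetical values.
MIM_DENSITY_F_PER_UM2 = 376e-15   # 376 fF/um^2, Intel 4 disclosure
die_area_um2 = 50e6               # hypothetical 50 mm^2 of MIM coverage
C = MIM_DENSITY_F_PER_UM2 * die_area_um2  # total decoupling capacitance, F

i_step = 10.0   # amps, hypothetical load current step
dt = 1e-9       # seconds before upstream regulation responds, hypothetical
droop = i_step * dt / C
print(f"C = {C * 1e6:.1f} uF, droop = {droop * 1e3:.2f} mV")
```

Halving the MIM density in this model (the 10nm value) would double the droop for the same area, which is why the capacitor improvement translates into clock-frequency headroom.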
The most important product on the Intel 4 node is the Meteor Lake compute tile, which will be integrated into a full product using Foveros – Intel's 3D die-stacking technology. The process supports a 36µm-pitch microbump for stacking, and high-voltage I/O interfaces such as PCIe or USB will reside on a separate I/O tile. One advantage of this arrangement is that the Intel 4 process does not require complicated thick-gate-oxide transistors for high-voltage I/O interfaces, again narrowing the scope of process development. The Intel 3 node is likely to support high-voltage I/O transistors, given the fuller set of features needed for the foundry business and server products.