22nm Design Challenges at ISSCC 2011


Differences at 22nm

Overall, there seemed to be a fairly firm consensus regarding scaling to the 22nm node. To a large extent this reflects the nature of the challenges: the physical phenomena impact everyone equally. Fortunately, there were some points of divergence among the panelists to enliven the discussion, and these differences largely stem from the economic situations facing the leading semiconductor companies.

Both IBM and AMD have used partially depleted silicon-on-insulator (PD-SOI) down to the 32nm node, and Ghavam Shahidi maintained that the performance and variability benefits remain sufficient going forward; IBM's stance that PD-SOI will continue to be useful for high-performance applications was one exception to the consensus view on 22nm. While PD-SOI boosts performance over bulk silicon, it significantly increases wafer cost: in essence, PD-SOI reduces the fixed cost of developing a new process technology, but raises the variable cost of manufacturing the resulting chips. Global Foundries did not seem to have nearly as sanguine an outlook, undoubtedly because foundry customers focus heavily on variable manufacturing costs. Most of the panelists expressed hope for fully depleted SOI at a future node (15nm or below), which eliminates random dopant fluctuation, but it is a fundamentally different technology than PD-SOI.

Everyone seemed to agree that packaging will play an important role in the future, but Global Foundries took an even stronger view. Bill Liu suggested that 3D packaging and integration, particularly through-silicon vias (TSVs), are essential to continuing Moore's Law. While he acknowledged that there are still issues with wafer bumping, he was also far more optimistic about the timeline for TSV viability; the other panelists did not seem to think that TSVs would be viable at the 22nm node.

Mark Bohr of Intel made a contrarian point about the costs of double patterning, which most of the panelists considered to be unattractive. Using two exposures on critical layers seems expensive because it reduces throughput. However, double patterning significantly reduces capital expenditures, since the expensive lithography equipment can be re-used across future generations. In contrast, using immersion lithography to achieve the same benefits requires new equipment and introduces yield risks. Moreover, judicious use of restrictive design rules (RDRs) can reduce the number of layers that need double patterning and thus limit the throughput impact. Additionally, double patterning is a manufacturing technique that works with almost any type of lithography, and is a valuable skill to master going forward.

The last and least surprising difference was on 450mm wafers. Intel and TSMC clearly believe that increasing wafer size will significantly improve the cost structure for manufacturing. TSMC has even publicly committed to a 450mm fab for the 20nm node. Global Foundries and IBM were much less enthusiastic about the prospect and were concerned with the increased cost of process technology development and fabs. This is entirely expected since Intel and TSMC have substantially higher volumes and are willing to increase their capital expenditures to reduce variable costs.
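To put the potential benefit in rough perspective, a 450mm wafer has (450/300)² = 2.25 times the area of a 300mm wafer. The sketch below uses a common first-order dies-per-wafer approximation (the 100 mm² die size and the formula are illustrative assumptions, not figures from the panel) to show that the gross die count per wafer scales slightly better than the area ratio, since proportionally less silicon is lost at the wafer edge.

```python
import math

def gross_dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """First-order estimate of gross dies per wafer (ignores defects and scribe lines)."""
    d = wafer_diameter_mm
    # Common approximation: wafer area / die area, minus an edge-loss term.
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

die_area = 100.0  # hypothetical 100 mm^2 die
d300 = gross_dies_per_wafer(300, die_area)
d450 = gross_dies_per_wafer(450, die_area)
print(d300, d450, round(d450 / d300, 2))  # roughly 640, 1490, 2.33x
```

Since wafer processing cost does not grow in proportion to wafer area, the extra dies translate directly into a lower variable cost per chip, which is exactly the trade-off that favors high-volume manufacturers.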

Conclusions

One point, raised by Min Cao but likely a universal opinion, was the importance of designers moving away from purely deterministic methods. Variation has the biggest impact on worst-case performance and power, but it is nearly impossible for all the transistors, interconnects, etc. within a single circuit to simultaneously suffer worst-case variation. Statistical design using Monte Carlo methods considers the circuit as a whole and evaluates more realistic 'worst case' scenarios, substantially mitigating the impact of variation. However, the computational overhead is significant, so Monte Carlo modeling must be used selectively.
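To illustrate the difference, consider a toy model (not any panelist's actual methodology) of a hypothetical 20-stage path in which each gate's delay varies independently and normally around its nominal value. Summing every gate's +3σ delay gives the deterministic corner estimate, while sampling whole paths shows that a realistic high-percentile delay is far lower.

```python
import random
import statistics

STAGES = 20
NOMINAL_PS = 10.0   # hypothetical nominal gate delay in picoseconds
SIGMA_PS = 1.0      # hypothetical per-gate standard deviation from variation

# Deterministic corner analysis: assume every gate is simultaneously at +3 sigma.
worst_case = STAGES * (NOMINAL_PS + 3 * SIGMA_PS)

# Statistical (Monte Carlo) analysis: sample full paths and look at a high percentile.
samples = [
    sum(random.gauss(NOMINAL_PS, SIGMA_PS) for _ in range(STAGES))
    for _ in range(100_000)
]
p999 = statistics.quantiles(samples, n=1000)[-1]  # ~99.9th percentile path delay

print(f"deterministic worst case: {worst_case:.1f} ps")   # 260.0 ps
print(f"Monte Carlo 99.9th pct : {p999:.1f} ps")           # roughly 214 ps
# Independent per-gate variations partially cancel (the sigma of the sum grows as
# sqrt(N)), so the realistic 'worst case' is well below the all-corners-at-once bound.
```

The gap between the two numbers is the design margin that a purely deterministic methodology wastes, which is why statistical analysis is attractive despite its computational cost.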

In many respects, the idea that designers should think probabilistically rather than deterministically highlights the fundamental implication of the panel. At 22nm and beyond, manufacturing can no longer cleanly abstract away the underlying physical challenges of semiconductor scaling. This drives the need for co-optimization between process technology and chip design. The inescapable conclusion is that the physical design of integrated circuits is becoming ever more critical at smaller geometries.

A keen grasp of semiconductor physics means that a design team can more readily anticipate and adapt to the risks and implications of the challenges at 22nm and beyond. Circuit designers can then help steer architectural choices in the right direction, improving the co-optimization process and enabling teams to creatively adapt circuits to the necessary restrictive design rules. Similarly, changes in process characterization and design rules, or unexpected yield issues, are much less disruptive and dangerous to schedules for a team that intimately understands the related trade-offs. Ultimately, the engineers and design teams that best understand the underlying physics will likely achieve better performance, power, and cost, yielding a significant competitive advantage.
