In the tech industry, transistor count and transistor density are often portrayed as technical achievements and milestones. Upon the release of a new processor or SoC, many a vendor brags about the complexity of their design, as measured by transistor count. As a recent example, when Apple released the A13 Bionic inside the iPhone 11 generation, the company crowed that it contained 8.5 billion transistors; similarly, in 2006 Intel bragged about Montecito, the first billion-transistor processor.
For the most part, these constantly increasing transistor counts are a consequence of Moore’s Law and the drive to ever greater levels of miniaturization. As the industry moves to newer process technologies, the number of transistors per unit area keeps rising. For this reason, transistor count is often considered a proxy for the health of Moore’s Law, although that is not entirely accurate. Moore’s Law in its original form observes that the transistor count of an economically optimal (i.e., minimum cost per transistor) design doubles every two years. But from a customer standpoint, Moore’s Law is really a promise that the processors of tomorrow will be even better and more valuable than the processors of today.
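The two-year doubling cadence described above is simple compound growth. As a hedged illustration (the function name and the ten-year horizon are my own, and real products have never tracked the cadence this cleanly), a sketch of the arithmetic:

```python
# Sketch of Moore's Law as stated above: the transistor count of the
# economically optimal design doubles every two years.
def projected_transistors(initial_count: float, years: float) -> float:
    """Project a transistor count forward under a 2-year doubling cadence."""
    return initial_count * 2 ** (years / 2)

# Illustrative only: starting from the A13 Bionic's 8.5 billion
# transistors, a decade of on-cadence doubling implies 2**5 = 32x growth.
print(f"{projected_transistors(8.5e9, 10) / 1e9:.0f} billion")  # 272 billion
```

The point is the exponent: five doublings in ten years, regardless of the starting count.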
In reality, transistor density varies considerably based on the type of chip and especially the type of circuitry within the chip. Worse yet, there is no standard way of counting transistors, and the reported numbers can vary by 33-37% for the same design. The net result is that transistor count and density are only approximate metrics, and focusing on those particular numbers risks losing sight of the bigger picture.
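To make the counting ambiguity concrete, a small sketch applying the 33-37% variance band from the text to the A13's headline figure. The direction of the adjustment is arbitrary here (the text does not say which methodology reports the higher number); the point is the width of the band:

```python
# Apply the 33-37% counting variance described in the text to the
# A13's headline count of 8.5 billion transistors. Illustrative only:
# which methodology yields the higher figure is not specified.
headline = 8.5e9

for pct in (0.33, 0.37):
    alternative = headline * (1 + pct)
    print(f"{pct:.0%} variance -> {alternative / 1e9:.1f} billion")
```

A "same design" count that can plausibly land anywhere from 8.5 to well over 11 billion is a rough metric indeed.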
Product Objectives Influence Design Style
The transistor density is intimately related to the overall objectives and design style. Comparing substantially different designs such as a fixed-performance ASIC (e.g., Broadcom’s Tomahawk 4 25.6Tb/s switch chip or Cisco’s Silicon One 10.8Tb/s router chip) and a high-performance datacenter processor (e.g., Intel Cascade Lake or Google’s TPU3) is misleading at best.
An ASIC needs to deliver the targeted throughput, but does not benefit from any incremental frequency. For example, the Cisco Silicon One is intended for high-speed networking using 400Gbps Ethernet, and there is no advantage to boosting the frequency by 10%; 400Gbps is the standard set by IEEE, and the next step after that is 800Gbps. As a result, most ASIC design teams tend to optimize for minimum cost with highly automated design tools, fewer custom circuits, and dense transistors.
In contrast, a faster server chip can usually command higher prices and therefore nearly always benefits from incremental frequency. For example, the Xeon 8268 and 8260 are both 24-core parts and the main difference is the base frequency (2.9GHz and 2.4GHz), which translates into about $1,600 difference in list price. The server design team will therefore optimize for frequency. High-speed designs like the server processor tend to use more custom circuit design and larger transistors that have greater drive strength and reduced variability. In modern FinFET-based designs, this translates into more transistors with 2 fins, 3 fins, or even more. In contrast, lower-speed logic, such as explicitly parallel GPUs and ASICs, often employs the densest transistors that use just a single fin, sacrificing clock speed to improve density.
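The Xeon comparison above implies a going rate for base frequency. As a rough sketch (only the 0.5GHz base-clock gap and the ~$1,600 list-price gap come from the text; the dollars-per-GHz framing is my own simplification, ignoring binning yields and the rest of the SKU stack):

```python
# Marginal value of frequency implied by the Xeon 8268 vs. 8260
# comparison: both are 24-core parts, so the list-price gap is
# (approximately) the price of 0.5 GHz of base clock.
freq_8268_ghz = 2.9
freq_8260_ghz = 2.4
price_gap_usd = 1600  # approximate list-price difference from the text

usd_per_ghz = price_gap_usd / (freq_8268_ghz - freq_8260_ghz)
print(f"~${usd_per_ghz:,.0f} per GHz of base frequency")
```

At roughly $3,200 per GHz across a server fleet, it is easy to see why the design team optimizes for frequency rather than density.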