The sessions we surveyed showed quite a bit of diversity. Intel’s presentation on Merom, though lackluster, focused on a key challenge: reducing the power consumption of caches, which occupy an increasingly large portion of overall die area. Caches are also, along with I/O, one of the areas of the chip guaranteed to receive plenty of attention and custom design. No doubt many details were left out of Intel’s presentation for competitive reasons.
One challenge that Intel’s Haifa designers did not have to contend with was integration. Sun and PA Semi both faced the difficulties of full system integration and everything it entails. The key challenge here is managing several different voltage and clock domains, for caches, cores, and multiple (or in PA’s case, reconfigurable) I/Os. This is doubly difficult since modern power-saving techniques often involve reducing the voltage and frequency on the fly in the cores and caches.
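The payoff from reducing voltage and frequency on the fly follows from the standard first-order model of dynamic CMOS power, P ≈ αCV²f. Because voltage enters squared, lowering frequency enough to also permit a lower supply voltage saves power disproportionately. A minimal sketch of that arithmetic, with all numbers purely illustrative and not drawn from any ISSCC paper:

```python
# Why dynamic voltage/frequency scaling (DVFS) saves power:
# dynamic power is roughly P = a * C * V^2 * f, where a is the activity
# factor, C the switched capacitance, V the supply voltage, f the clock
# frequency. All values below are hypothetical examples.

def dynamic_power(v, f, a=0.2, c=1.0e-9):
    """Approximate dynamic power in watts (V in volts, C in farads, f in Hz)."""
    return a * c * v * v * f

# Full-speed operating point (illustrative).
full = dynamic_power(v=1.2, f=3.0e9)

# Dropping to ~2/3 the frequency typically permits a lower voltage too;
# since power scales with V squared, the savings compound.
scaled = dynamic_power(v=1.0, f=2.0e9)

print(f"full speed: {full:.3f} W, scaled: {scaled:.3f} W")
print(f"power saved: {1 - scaled / full:.0%}")
```

Note that a one-third frequency reduction paired with a modest voltage drop cuts power by roughly half in this model, which is why per-domain voltage and clock control is worth the design complexity.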
Power saving, of course, is a common theme throughout all MPU designs, as almost everyone suffers from thermal and power limits. Clock gating is a given now, although the degree of clock gating varies from project to project. Software power management is also de rigueur; pretty much every design offers some form of software-triggered sleep mode. One clear trend is that dynamic, per-part optimization will likely be standard in the not too distant future. While the first MPU to employ such techniques, Montecito, suffered a few missteps, the underlying ideas were present in force. The POWER6 and PA Semi’s design both employ conceptually similar techniques. In fact, one of the design forums held the day after ISSCC was titled “Adaptive Techniques for Dynamic Processor Optimization”, and included speakers from AMD, IBM, Intel, Texas Instruments, and others.
Of all the presentations we attended, the one most clearly aimed at the future covered the defect prediction techniques that NEC showed. One of the fundamental issues with continued CMOS scaling is that soft error rates, in both logic and memory, are expected to increase exponentially. Similarly, process variability is expected to rise dramatically. This means that system designers will inevitably be expected to build systems that can tolerate failures. When failure becomes a given over the lifetime of a product, techniques to predict failures before they occur will undoubtedly be quite useful.
This concludes our initial coverage of ISSCC 2007; the other notable papers, from Intel, IBM, and AMD, will be discussed in later, more detailed articles.