Systems versus Microprocessors
Kirk Skaugen and Pat Gelsinger forcefully tried to make the point that IPF is really positioned against RISC by noting that their 'benchmark' for measuring the success of IPF is system revenue relative to PowerPC and SPARC. To some extent, this is reasonable. System revenue is a measure of end-user demand, and also a good gauge of how attractive IPF is to hardware and software vendors. Obviously, if you are marketing enterprise software (say, analytic software such as SAS), total system revenue is going to play a key role in determining which architectures to target. The same goes for a vendor that is considering selling IPF systems.
However, system revenue is only loosely related to Intel's revenue from Itanium MPUs; Intel might only see 5-20% of the system revenue, and that share depends a great deal on how much storage and memory is used (those SGI systems with 10TB of memory probably do not earn Intel much, percentage-wise). While system revenue is an excellent metric for the overall health of the IPF ecosystem, Intel's goal is not to make boatloads of money for its partners; it is to make money for its shareholders. So at the end of the year, when Intel does performance reviews, it is a sure bet that the folks in the server division will care about Itanium MPU revenue. It is unquestionably good news for Intel that IPF system revenue is 42-45% that of SPARC and PPC. However, we would really like to know how many MPUs Intel is selling and how many end users there are.
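To make the gap between the two metrics concrete, here is a rough back-of-the-envelope sketch in Python. The system revenue figure below is a purely hypothetical placeholder; only the 5-20% MPU share range comes from the discussion above, so the output illustrates the sensitivity, not actual Intel financials.

```python
# Back-of-the-envelope illustration: how little of IPF *system* revenue
# actually flows to Intel as Itanium MPU revenue.
# NOTE: the system revenue figure is a made-up placeholder; only the
# 5-20% MPU share range comes from the discussion above.

hypothetical_ipf_system_revenue = 3.0e9   # $3B per year, purely illustrative

for mpu_share in (0.05, 0.10, 0.20):      # the 5-20% range discussed above
    intel_revenue = hypothetical_ipf_system_revenue * mpu_share
    print(f"MPU share {mpu_share:4.0%}: Intel sees roughly "
          f"${intel_revenue / 1e9:.2f}B out of "
          f"${hypothetical_ipf_system_revenue / 1e9:.1f}B in system revenue")
```

Memory- and storage-heavy configurations, like the SGI machines mentioned above, would sit at the low end of that range, which is why healthy-looking system revenue can coexist with modest MPU revenue.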
Stability is Innovation?
Another factor that is largely misunderstood is stability. One of the points Kirk emphasized was that the Itanium community generally perceives platform stability (spanning compilers, applications, MPUs, chipsets, chassis, pretty much everything) as a huge positive. There are many examples of this emphasis on stability in the product planning process. The most obvious is the unwillingness to change the front-side bus, which was already getting long in the tooth around 2003. Despite the performance benefits, upgrading to a modern interconnect would have required the OEMs to produce new chipsets and systems, which was deemed unacceptable. While it is easy to criticize the designers for staying with old technology, it is quite hard to accurately estimate the benefits of keeping things the same.
The same story played out with the underlying microarchitecture. Intel has decided to keep using a similar microarchitecture across four process generations (180nm down to 65nm). The design team acquired from DEC had designed and taped out a 65nm processor with 8 cores, which were likely 3-issue rather than 6-issue (like McKinley and its derivatives). This was ultimately cancelled in favor of a quad-core design using the same microarchitecture as Montecito. It seems rather likely that the DEC-designed Tukwila would have delivered higher total performance than the quad-core Tukwila that is slated to arrive in 2008. However, a new core might have required substantial compiler revisions (this is not guaranteed; a 3-issue MPU would probably run code scheduled for a 6-issue design acceptably, just not optimally). Worse yet, the Itanium system integrators would need to renegotiate licensing fees with software vendors. That would probably be relatively easy with the big players, such as SAP, Oracle and Microsoft, but reaching all the application vendors would be tricky, if not impossible.
The bottom line is that, as with anything, it is a trade-off: less platform stability enables higher performance from innovative technology, but is more inconvenient for partners and customers. Conversely, extremely stable platforms, like IBM mainframes or NonStop systems, do not improve performance as quickly as more rapidly evolving architectures, yet customers in particular market segments seem to prefer that arrangement.