Intel’s Quick Path Evolved


Platform Trends

In August 2007, our groundbreaking report disclosed Intel’s Quick Path Interconnect (née Common System Interface, or CSI), based on an extensive analysis of patent applications. QPI is a specification for a family of coherent interfaces that includes all the different capabilities needed by notebooks, desktops and servers.

Individual products implement only a subset of the features. For instance, many reliability features intended for high-end servers would only hinder a notebook, consuming extra die area and power while giving no real advantage. In fact, some of the features described in the patents have yet to appear in any product. One of the distinct hazards of relying on patents to predict future products is that patents often describe and claim anything of technical interest, regardless of its practical applicability.

QPI was announced in late 2007 and debuted in late 2008 and early 2009 across notebooks, desktops and servers. The first generation of processors based on the QuickPath Interconnect includes the Nehalem, Westmere and Tukwila families. QPI was primarily used to connect the microprocessor to the I/O Hub (IOH) or to other microprocessors in multi-socket systems.

The biggest boost was for servers, mirroring the benefits that AMD reaped earlier with HyperTransport in the Opteron. Tukwila has 12X the raw interconnect bandwidth of its predecessor, and Nehalem-EX is an impressive 16X compared to Dunnington. In conjunction with the integrated memory controllers, this translated into massive gains in performance and Intel’s resurgence in the server market. However, QPI was also used in consumer products to connect the microprocessor to the memory controller. For example, Arrandale’s CPU connects to the GMCH (which contains the Ironlake GPU and a DDR3 memory controller) through QPI.

Today though, QPI fills a slightly different role, as shown in Figure 1. Sandy Bridge integrates the IOH, sporting 20 lanes of high-speed PCI-E 2.0 and 20GB/s of bandwidth (though 4 lanes are dedicated to DMI). High performance devices such as GPUs and RAID controllers can be directly connected to the microprocessor. There are enough PCI-E lanes to handle dual-GPU configurations, or a GPU plus a fairly high performance network card or storage controller. Since an IOH is no longer needed, there is no reason to include QPI in consumer parts – only PCI-E and Intel’s Direct Media Interface (DMI, for use with the Southbridge) are needed. Going forward, QPI is primarily for servers and workstations.
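The 20GB/s figure follows directly from the PCI-E 2.0 per-lane signaling rate. A quick back-of-the-envelope sketch, assuming the article’s total counts both directions of the link (the rates and 8b/10b encoding overhead are from the PCI-E 2.0 specification; the aggregation is our reading of the number):

```python
# Sanity-check the ~20 GB/s figure for 20 integrated PCI-E 2.0 lanes.
GT_PER_SEC = 5.0      # PCI-E 2.0 signaling rate per lane (GT/s)
ENCODING = 8 / 10     # 8b/10b encoding: 8 payload bits per 10 bits on the wire
LANES = 20

gbit_per_lane = GT_PER_SEC * ENCODING      # 4 Gbit/s of payload per lane
gbyte_per_lane = gbit_per_lane / 8         # 0.5 GB/s per lane, per direction
per_direction = gbyte_per_lane * LANES     # 10 GB/s in each direction
aggregate = per_direction * 2              # 20 GB/s counting both directions

print(per_direction, aggregate)  # 10.0 20.0
```

So the headline number only works out if reads and writes are summed; a single direction tops out at 10 GB/s across all 20 lanes.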


Figure 1 – Intel Platform Architecture Comparison

To adjust to changes in the industry and system architecture, Intel has announced a second-generation specification, Quick Path Interconnect 1.1. This new version is backwards compatible with existing QPI interfaces, which are now collectively referred to as QPI 1.0. All current x86 and IPF platforms use QPI 1.0 – including Nehalem-EP/EX, Westmere-EP/EX and Tukwila. QPI 1.1 further unifies the x86 and IPF flavors, so that Itanium can re-use and benefit more from the tremendous investments in x86 system architecture. The goals for the next generation of QPI are higher performance, better efficiency and improved reliability. To that end, QPI 1.1 has numerous improvements at the electrical, logical and protocol levels. Sandy Bridge-EP and the Romley platform will be the first products to use QPI 1.1, followed by Ivy Bridge-EP/EX, and it is possible that Poulson will follow suit.


