Chip Multi-Processing: A Method to the Madness


Introduction

While the industry has shipped Chip Multi-Processing (CMP) designs since 2001, only recently has the technology moved into the mainstream. The first few designs showed little variety, but as the technology matured, newer and more complex approaches surfaced. Today’s CMP MPUs look much like the shared memory systems of yesterday, which should come as no surprise to industry veterans, since CMP essentially migrates a shared memory system onto a single chip. Each implementation varies the level of integration and the way resources are shared or partitioned, and therefore presents its own set of trade-offs and benefits. This article presents an overview of three CMP approaches, explores the compromises and advantages of each, and offers a speculative look into the future of CMP design.


Figure 1 – Three CMP Implementations

For the purposes of classification, each type of CMP will be described by the lowest level of integration between the processor cores. Going from left to right, Figure 1 shows the Shared Cache CMP, the Shared Interface CMP and the Shared Package CMP. In the context of this article, the term “I/O” or “I/O interface” refers to off-chip communication, including memory, disk, network, inter-processor traffic and so on. Also note that in a shared cache architecture only the higher levels of cache (L2 or L3) are shared; the L1 caches are too tightly integrated into each core to be shared and are always private.
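As a concrete illustration of that split between private L1 caches and a shared outer cache, the short sketch below reads the cache topology that Linux exposes through sysfs and prints which logical CPUs share each cache level. This is purely illustrative and not part of the article; it assumes a Linux system that provides /sys/devices/system/cpu/cpuN/cache/, and the helper name cache_topology is made up for the example.

```python
# Illustrative sketch (assumes Linux sysfs cache topology is available).
from pathlib import Path

def cache_topology(cpu: int = 0):
    """Return (level/type, size, sharing CPU list) for each cache of one CPU."""
    base = Path(f"/sys/devices/system/cpu/cpu{cpu}/cache")
    entries = []
    for idx in sorted(base.glob("index*")):
        level = (idx / "level").read_text().strip()
        ctype = (idx / "type").read_text().strip()            # Data / Instruction / Unified
        size = (idx / "size").read_text().strip()
        sharers = (idx / "shared_cpu_list").read_text().strip()
        entries.append((f"L{level} {ctype}", size, sharers))
    return entries

if __name__ == "__main__":
    for name, size, sharers in cache_topology(0):
        print(f"{name:16} {size:>8}  shared by CPUs {sharers}")
```

On a shared cache CMP, output of this kind typically shows each L1 entry shared only by the logical CPUs of a single core, while the L2 or L3 entry lists every core that sits behind that cache.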


