HP’s Struggle For Simplicity Ends at Intel


A Renewed Quest For Architectural Simplicity

As is often the case for troubled projects, the initial architects of what evolved into IA-64 started with the best of intentions: researchers at Hewlett Packard Laboratories (HPL) wanted to lead a second RISC revolution. The first RISC revolution, more than a decade earlier, was a reaction to the escalating complexity of computer architectures like the Intel iAPX-432 and DEC VAX-11, made possible by an ever increasing amount of microcode.

The first group of revolutionaries, led by John Cocke, David Patterson, and John Hennessy, sought to throw out the edifice of complexity built out of microcode in favour of simplified load/store architectures. Their precept was to shift much of the complexity of synthesizing and scheduling high level operations to the programming language compiler, so that hardware could stick to the fundamental operations of computation and run quickly and unencumbered. After a short but intense period of controversy, the RISC school of thought quickly became recognized as the best way to design general purpose, high performance processors. Virtually every major computer and microprocessor vendor initiated a program to develop its own RISC instruction set architecture (ISA).

The search for ever higher performance eventually led to superscalar processors, that is, processors capable of issuing (beginning the execution of) two or more instructions simultaneously. Unfortunately, superscalar capability could not be extended indefinitely, even as semiconductor process engineers offered up an exponentially increasing number of transistors to computer designers, for two main reasons. First, increasing the issue width of superscalar processors suffers diminishing performance returns because stalls (forced pauses to wait for memory accesses or the resolution of dependencies between instructions) occur more often. Second, the amount of logic needed to check for dependencies between instructions generally grows quadratically with issue width.
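To see why the checking logic grows quadratically, consider that every pair of instructions in an issue group must be compared for register conflicts. The sketch below is purely illustrative (it models no real pipeline, and the register-tuple encoding is an assumption for demonstration), but it shows the w*(w-1)/2 growth in comparator pairs as issue width w increases.

```python
# Illustrative sketch only: count the register cross-checks a
# superscalar issue stage must perform on one group of instructions.
# Each instruction is modeled as a (dest, src1, src2) register tuple.

def dependency_checks(group):
    """Count pairwise comparisons and detected hazards in an issue group."""
    checks = 0
    hazards = 0
    for i, early in enumerate(group):
        for late in group[i + 1:]:
            d_e, s1_e, s2_e = early
            d_l, s1_l, s2_l = late
            checks += 1
            # RAW: the later instruction reads the earlier one's destination.
            # WAW: both write the same register.
            # WAR: the later instruction writes an earlier source.
            if d_e in (s1_l, s2_l) or d_e == d_l or d_l in (s1_e, s2_e):
                hazards += 1
    return checks, hazards

# Comparator pairs grow as w*(w-1)/2 with issue width w:
for w in (2, 4, 8):
    group = [(r, r + 1, r + 2) for r in range(0, 3 * w, 3)]
    print(w, dependency_checks(group)[0])  # prints 1, 6, 28 pairs
```

Doubling the issue width roughly quadruples the comparison count, which is the wire and comparator cost that worried hardware designers.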

The first problem was partially dealt with by designing processor implementations that could execute a large number of instructions out of order (up to 80 or more) and track them in hardware to ensure that the computational state could be backed up if required to handle mispredicted branches and raised exceptions. Because the order in which instructions execute at run time isn’t always known in advance, this is also called dynamic execution, dynamic scheduling, or simply out-of-order execution.
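The essence of dynamic scheduling can be sketched in a few lines: each cycle, issue any instruction whose source operands have already been produced, regardless of program order. This toy model (names and encoding are assumptions for illustration; real hardware also handles recovery, renaming, and in-order retirement) captures only that core idea.

```python
# Toy out-of-order scheduler (illustrative only): issue any instruction
# whose sources are ready, regardless of its position in program order.

def schedule(instrs):
    """instrs: list of (name, dest_reg, src_regs). Returns issue order."""
    ready = set()           # registers whose values have been produced
    pending = list(instrs)
    order = []
    while pending:
        issuable = [i for i in pending if all(s in ready for s in i[2])]
        if not issuable:
            raise RuntimeError("deadlock: unsatisfiable dependency")
        for name, dest, _srcs in issuable:
            order.append(name)
            ready.add(dest)  # result becomes visible to later instructions
        pending = [i for i in pending if i[0] not in order]
    return order

# 'mul' must wait for 'load', but the independent 'add' slips ahead of it:
prog = [("load", "r1", []), ("mul", "r2", ["r1"]), ("add", "r3", [])]
print(schedule(prog))  # ['load', 'add', 'mul']
```

The hardware cost lies not in this scheduling decision itself but in tracking dozens of in-flight instructions so that state can be rolled back on a mispredicted branch or exception.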

Jerry Huck and his team of HPL researchers wanted to initiate a second wave of hardware simplification by once again moving much of this new complexity out of hardware and into the compiler. They aimed to dispose of the two features that are both a blessing and a curse to high end microprocessor designers – dependency checking and dynamic scheduling. To make a long story short, this work ultimately led to the development of the PlayDoh architecture test bench system. The PlayDoh ISA has a slew of interesting and innovative new features, as well as some rarely used though widely known ones. Most of PlayDoh’s features seem to have been incorporated directly into the IA-64 architecture developed by Intel and HP.

