When the term RISC was introduced a second term was created, Complex Instruction Set Computing, or CISC, which was basically a label applied to the existing popular computer architectures such as the IBM S/370, DEC VAX, Intel x86, and Motorola 680x0. Compared to the remarkably similar ISAs of the self-proclaimed RISC architectures, the CISC group was quite diverse in nature. Some were organized around large general purpose register files while others had just a few special purpose registers and were oriented to processing data in situ in memory. In general, the CISC architectures were the product of decades of evolutionary progress towards ever more complex instruction sets and addressing modes, brought about by the enabling technology of microcoded control logic, and driven by the pervasive belief that computer design should close the "semantic gap" with high level programming languages to make programming simpler and more efficient.
In some ways CISC was a natural outgrowth of the economic reality of computer technology up until the late 1970s. Main memory was slow and expensive, while read only memory for microcode was relatively cheap and many times faster. The instructions in the so-called CISC ISAs tended to vary considerably in length and to be tightly and sequentially encoded (i.e. the instruction decoder had to look in one field to tell if a second optional field or extension was present, which in turn would dictate where a third field might be located in the instruction stream, and so on).
For example, a VAX-11 instruction varied in length from 1 to 37 bytes. The opcode byte would define the number of operand specifiers (up to 6), and each had to be decoded in sequence because there could be 8, 16, or 32 bit long displacement or immediate values associated with each specifier. This elegant scheme was a delight for VAX assembly language programmers, because they could use any meaningful combination of addressing modes for most instructions without worrying about whether instruction X supported addressing mode Y. However, it would become a major hurdle to the construction of high performance VAX implementations within a decade after its introduction.
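The serial dependence described above can be illustrated with a small sketch. This is not the real VAX encoding; the mode values and extra-byte counts below are invented for illustration. The point it demonstrates is real, though: when each operand specifier's addressing mode determines how many payload bytes follow it, the position of specifier N in the byte stream cannot be known until specifier N-1 has been fully decoded.

```python
# Hypothetical mode -> payload-byte-count table (illustrative only, not VAX).
EXTRA_BYTES = {0x0: 0,   # register mode: no payload bytes
               0xA: 1,   # byte displacement follows
               0xC: 2,   # word displacement follows
               0xE: 4}   # longword displacement follows

def decode_specifiers(stream, count):
    """Return the byte offset of each operand specifier in `stream`.

    Decoding is inherently sequential: each specifier's offset depends on
    the fully decoded length of every specifier before it.
    """
    offsets = []
    pos = 0
    for _ in range(count):
        offsets.append(pos)
        mode = stream[pos] >> 4            # high nibble selects addressing mode
        pos += 1 + EXTRA_BYTES[mode]       # skip mode byte plus its payload
    return offsets

# Three specifiers: register, byte-displacement, word-displacement.
print(decode_specifiers(bytes([0x05, 0xA3, 0x7F, 0xC2, 0x00, 0x10]), 3))
# -> [0, 1, 3]
```

A hardware decoder faces the same chain of dependencies, which is why wide, parallel decode of such a format is expensive.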
Other CISC architectures, like x86, had a simpler and less orthogonal set of addressing modes but still included features that contributed to slow, sequential instruction decode. For example, an x86 instruction opcode could be preceded by an optional instruction prefix byte, an optional address size prefix byte, an optional operand size prefix byte, and an optional segment override prefix byte. Not only are these variable length schemes complex and slow, but they are also susceptible to design errors in processor control logic. For example, the recent "F00F" bug in Intel Pentium II processors was a security hole related to the F0 (hex) LOCK instruction prefix byte, wherein a rogue user mode program could lock up a multi-user system or server.
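The prefix mechanism means an x86 decoder must walk the byte stream one byte at a time before it even finds the opcode. A minimal sketch of that scan follows; the prefix byte values are real x86 encodings (LOCK = F0, operand-size = 66, address-size = 67, plus the REP and segment-override prefixes), but the decoder itself is a deliberate simplification of what real hardware does.

```python
# Real x86 single-byte prefix values; the scan loop is a simplification.
PREFIX_BYTES = {0xF0,                    # LOCK (the prefix involved in the F00F bug)
                0xF2, 0xF3,              # REPNE / REP
                0x66,                    # operand-size override
                0x67,                    # address-size override
                0x2E, 0x36, 0x3E,        # CS, SS, DS segment overrides
                0x26, 0x64, 0x65}        # ES, FS, GS segment overrides

def skip_prefixes(insn):
    """Return (prefix_bytes, opcode_offset) for a raw instruction byte string."""
    i = 0
    while i < len(insn) and insn[i] in PREFIX_BYTES:
        i += 1       # each byte must be examined before the next is classified
    return insn[:i], i

# LOCK + operand-size prefix before the opcode bytes: the opcode's position
# is only known after every preceding byte has been inspected in turn.
prefixes, opcode_offset = skip_prefixes(bytes([0xF0, 0x66, 0x0F, 0xC7, 0xC8]))
```

Nothing in the encoding marks where the prefixes end, so the scan cannot be skipped; that is exactly the kind of serial work the predecode hints mentioned below are meant to amortize.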
To illustrate the large contrast between the instruction encoding formats used by CISC and RISC processors, the instruction formats for the Intel x86 and Compaq Alpha processor architectures are shown in Figure 1. In the case of x86 there is a lot of sequential decoding that has to be accomplished (although modern x86 processors often predecode x86 instructions while loading them into the instruction cache, and store instruction hints and boundary information as 2 or 3 extra bits per instruction byte). For the Alpha (and virtually every other classic RISC design) the instruction length is fixed at 32 bits and the major fields appear in the same locations in all the formats. It is standard practice in RISC processors to fetch operand data from registers (or bypass paths) even as the instruction opcode field is decoded.
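The payoff of fixed field positions can be shown concretely. In the sketch below, the field positions follow the Alpha operate format (opcode in bits 31-26, Ra in bits 25-21, Rb in bits 20-16, Rc in bits 4-0); the helper name is our own. Because every field sits at a fixed bit position, all of them can be extracted in parallel with simple wiring, and the register file can be read before the opcode has even been interpreted.

```python
def alpha_fields(word):
    """Extract the fixed fields of a 32-bit Alpha operate-format instruction.

    Each extraction is an independent shift-and-mask, so in hardware the
    fields are available simultaneously, with no serial dependence.
    """
    return {"opcode": (word >> 26) & 0x3F,   # bits 31-26
            "ra":     (word >> 21) & 0x1F,   # bits 25-21: first source register
            "rb":     (word >> 16) & 0x1F,   # bits 20-16: second source register
            "rc":     word & 0x1F}           # bits 4-0:   destination register

# Construct a word with opcode 0x10, Ra=1, Rb=2, Rc=3 and pull it back apart.
word = (0x10 << 26) | (1 << 21) | (2 << 16) | 3
fields = alpha_fields(word)
```

Contrast this with the prefix and specifier scans above: here no field's position depends on the value of any other field.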