Will You Still Need Me When I’m 64?
There are a lot of misconceptions about what exactly the benefits of 64-bit computing are. There are two main capabilities 64-bit processors have that 32-bit processors don’t. The first is the ability to directly perform flat addressing of more than 4 Gbytes of memory using 64-bit logical addresses. The second is the ability to load, store, and perform arithmetic and logical operations on 64-bit integer and pointer data types using single instructions. The implementation costs associated with 64-bit ISAs are very modest relative to 32-bit implementations. Even in low-cost processors for the embedded control market, the extra logic needed to implement 64-bit-wide register files and datapaths has a negligible effect on chip area and clock rate.
The first important thing to remember about current 64-bit RISC processors is that they support both 32-bit and 64-bit data manipulation and are no less efficient than 32-bit processors in handling 32-bit data. The second is that most can support both 32-bit and 64-bit addressing and pointers. Address size and integer data size are orthogonal capabilities: it is possible, for example, to use 64-bit addresses with 32-bit data, or vice versa, in a program. And 32-bit and 64-bit integer data can be freely mixed in the same program, just as single precision and double precision floating point data formats are. Most existing 64-bit processors could even handle both 32-bit and 64-bit addressing in the same program if the programming tool set supported it. It is very likely that x86-64 will duplicate the flexibility 64-bit RISC processors have shown in handling both 32-bit and 64-bit applications in a painless fashion.
For large scale programs used in database management and computer aided design, the ability to directly address more than 4 Gbytes of memory can greatly improve performance compared to the more programmer visible, explicit, and non-portable mechanisms needed to manipulate large data sets and/or extended memory capacity on 32-bit processors. However, there is a small performance disadvantage to using 64-bit addressing in programs whose data and code fit comfortably in a 4 Gbyte address space. With 64-bit addressing every pointer takes up 4 more bytes of storage in memory and on disk. For a given page size, amount of cache, and number of TLB entries, the use of 64-bit addressing will increase miss rates and reduce performance. According to a 1994 study (‘Performance Implications of Multiple Pointer Sizes’, J. Mogul et al, DECWRL), the use of 64-bit addressing reduces application performance by 2 to 10% with an average of about 5%.
The performance benefits of being able to perform arithmetic and logical operations on 64-bit data items are strongly application dependent. Applications in cryptography and discrete optimization often manipulate integers and bitfields hundreds of bits or more in size. These applications sometimes see performance increases of 2x or more when compiled for and run on a 64-bit platform vs. a 32-bit platform. Even programs that never manipulate integers larger than 32 bits can gain a small performance benefit from being compiled for and run on a 64-bit processor. This occurs when the compiler generates ‘hidden’ 64-bit data size instructions for programs that make use of composite or aggregate data types such as strings, bitfields, records, and fixed size pass-by-value arrays. Although SIMD instruction set extensions, such as SSE and AltiVec, can offer some capabilities for performing similar optimizations in 32-bit processors, these typically treat 64-bit data as a ‘second class’ citizen, with limited operations available and no ability to use the general purpose register set.