The Golden Age of Performance Prediction
One of the most important measures of how well a complex phenomenon is understood is the ability to make accurate predictions based on mathematical models. We can judge celestial mechanics to be well understood because astronomers can predict the positions of the planets in our solar system years in advance with very high accuracy. On the other hand, the geological processes governing catastrophic events like earthquakes and volcanic eruptions are poorly understood and difficult to model; hence the experts in that field have next to no predictive ability.
Predicting the effects of changing system parameters on the performance of microprocessor-based computer systems was largely on the same level as celestial mechanics in the days of 8- and 16-bit processors. Those devices and their associated memory systems operated in a serial manner, and programmers could calculate the exact number of clock cycles a particular path through their code would take using a table of instruction cycle counts, a pencil, and a piece of paper. The effect on performance of changing the processor clock frequency or the number of memory wait states was readily and accurately predictable.
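To make the pencil-and-paper method concrete, here is a minimal sketch in Python. The per-instruction cycle counts and the 4.77 MHz clock are illustrative assumptions, not figures taken from a vendor data sheet; the point is only that summing table entries gives an exact, repeatable answer.

```python
# Pencil-and-paper cycle counting, as done for 8- and 16-bit processors:
# look up each instruction in the published cycle table and add them up.
# Cycle values below are illustrative placeholders.
CYCLE_TABLE = {
    "mov reg,reg": 2,
    "add reg,reg": 3,
    "loop (taken)": 17,
}

def path_cycles(path):
    """Total clock cycles for one path through the code."""
    return sum(CYCLE_TABLE[insn] for insn in path)

loop_body = ["mov reg,reg", "add reg,reg", "loop (taken)"]
total = path_cycles(loop_body) * 100          # 100 loop iterations
print(total, "cycles")                        # an exact number, every time

# The effect of a clock change is equally predictable:
# elapsed time is simply cycles / frequency (4.77 MHz assumed here).
print(total / 4.77, "microseconds")
```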
The era of analytic microprocessor performance estimation has ended for many reasons. Processors became more sophisticated and started to improve performance by partially overlapping the execution times of specific combinations of instructions. New long-latency instructions like multiply and divide appeared, whose cycle counts could not be predicted a priori because they depended on the actual data values in a non-transparent fashion. For example, a 16-bit divide instruction takes between 165 and 184 clock cycles to execute on an 8088 [1].
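With data-dependent instructions in the mix, the same tally yields a range rather than a single number. In the sketch below, the 165 to 184 cycle span for the 16-bit divide is the figure quoted above; the other count is again an illustrative placeholder.

```python
# Best-case and worst-case cycle tables: the divide's latency depends on
# the operand values, so only a range can be computed in advance.
MIN_CYCLES = {"mov reg,reg": 2, "div reg16": 165}
MAX_CYCLES = {"mov reg,reg": 2, "div reg16": 184}

path = ["mov reg,reg", "div reg16"]
lo = sum(MIN_CYCLES[i] for i in path)
hi = sum(MAX_CYCLES[i] for i in path)
print(f"{lo}-{hi} cycles")   # exact prediction is no longer possible
```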
The appearance of caches also greatly complicated analytic performance estimation, because the cycle count of an instruction could vary widely depending on the hit or miss outcome of the fetches that obtain the instruction and of any data accesses it performs. Even memory access time became harder to predict with the advent of new types of memory like fast page mode DRAM, which offered the memory controller a faster access mode when the addresses of consecutive memory accesses were close together.
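Once caches and page-mode DRAM enter the picture, the best an analytic model can offer is an average access time parameterized by hit rates that depend on the actual access pattern. The sketch below uses the standard weighted-average approach; every latency and hit-rate value is an assumed number for illustration, not a measurement of any particular system.

```python
# Average memory access time with a cache in front of page-mode DRAM.
# All cycle counts and hit rates below are assumed, illustrative values.
CACHE_HIT_CYCLES = 1
DRAM_PAGE_HIT_CYCLES = 4      # fast page mode: the DRAM row is already open
DRAM_PAGE_MISS_CYCLES = 10    # a new row must be opened first

def average_access_cycles(cache_hit_rate, page_hit_rate):
    # Miss penalty depends on whether the DRAM access hits an open page.
    miss_penalty = (page_hit_rate * DRAM_PAGE_HIT_CYCLES
                    + (1.0 - page_hit_rate) * DRAM_PAGE_MISS_CYCLES)
    # Weighted average over cache hits and misses.
    return (cache_hit_rate * CACHE_HIT_CYCLES
            + (1.0 - cache_hit_rate) * (CACHE_HIT_CYCLES + miss_penalty))

print(average_access_cycles(cache_hit_rate=0.95, page_hit_rate=0.6))
```

The result is only as good as the hit-rate estimates, which is precisely why the exact, deterministic prediction of the earlier era was lost.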