A Better Crystal Ball


Conclusion and References

The probabilistic nature of instruction execution times on modern computer hardware is a far cry from the early days of the microprocessor, when code performance was readily calculable. Although the execution time of an individual instruction is now effectively unpredictable, it takes the execution of millions of instructions to perform any task discernible to the computer user. That makes it possible to apply statistical methods to model computer performance.

A simple semi-empirical performance model for estimating the effect of changes to processor clock frequency was derived. Unfortunately, the large number of software and hardware variables, apart from processor clock frequency, that must be held constant in order to fit observed data limits the general applicability of the technique. Individual SPEC2k benchmark scores for the Pentium 4 processor were used to test the model's ability to predict the change in performance from raising or lowering the clock frequency. The model appears to work reasonably well, although it is hard to distinguish model inaccuracy from architectural differences in P4 performance that may have occurred between the 1.3, 1.4, and 1.5 GHz data points and the 1.7 GHz data point.
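To illustrate how such a semi-empirical model might be fitted in practice, the sketch below assumes the execution time of a fixed workload takes the common two-term form t(f) = a/f + b, where a captures work that scales with the clock and b captures frequency-independent time such as memory latency. The data points, coefficients, and function names here are hypothetical illustrations, not the article's actual model or SPEC2k measurements.

```python
# Hypothetical sketch: fit t(f) = a/f + b to observed run times at known
# clock frequencies, then predict the run time at a new frequency.
# All numbers below are invented for illustration, not SPEC2k data.

def fit_model(freqs_ghz, times_s):
    """Least-squares fit of t = a*x + b with x = 1/f as the regressor."""
    xs = [1.0 / f for f in freqs_ghz]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_t = sum(times_s) / n
    a = sum((x - mean_x) * (t - mean_t) for x, t in zip(xs, times_s)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_t - a * mean_x
    return a, b

def predict_time(a, b, freq_ghz):
    """Predicted execution time at a given clock frequency."""
    return a / freq_ghz + b

# Synthetic observations at 1.3, 1.4, and 1.5 GHz that follow the model
# exactly with a = 100 (clock-scaled work) and b = 20 (fixed memory time)
freqs = [1.3, 1.4, 1.5]
times = [100.0 / f + 20.0 for f in freqs]

a, b = fit_model(freqs, times)
t_17 = predict_time(a, b, 1.7)
# Because of the fixed term b, the speedup from 1.5 to 1.7 GHz is
# smaller than the 13% clock-frequency increase alone would suggest.
speedup = predict_time(a, b, 1.5) / t_17
```

The point of the fixed term b is exactly the effect the article discusses: performance does not scale linearly with clock frequency, so extrapolating a benchmark score to a higher clock requires separating the two components first.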

References

[1] "The 8086 Family User's Manual", Intel Corp., 1979, p. 2-55.

[2] Standard Performance Evaluation Corporation web site, http://www.specbench.org

