As previously mentioned, most of the data used in this article has been normalized. Specifically, all the data is normalized to the number of instructions retired, which we report in absolute terms in Figure 2 below.
Figure 2 – Raw Instructions Retired
By itself, this data isn’t particularly insightful, so don’t read too much into the graph. The variation between the high- and low-quality graphics settings is due to increased load on the CPU: as scenes get more detailed, the CPU spends more time in the driver JIT-compiling DirectX or OpenGL code and transferring more commands and data to the GPU.
The differences between the AMD and Intel CPUs stem from both the profiling tools used and the executables themselves. The tool differences, specifically how each determines sampling frequency, were discussed earlier.
Another possible artifact arises because no single code path is optimal for all CPUs: some games ship two separate code paths, one for Intel CPUs and one for AMD. For instance, if the Intel path used vectorized SSE instructions while the AMD path used x87 (which would make sense for the respective CPUs used here), the Intel path would retire fewer instructions.
Anyway, the bottom line is that this graph isn’t terribly important on its own, but it is worth keeping in mind as we turn to more enlightening data.