Gaming performance is affected differently than business applications; in most cases it is tied to pure CPU power. I have left one variable out, and that is video cards. I had to draw the line somewhere, and I think video cards should be compared on their own, so they are not included here.
So how do the different CPU cores and their different L2 cache sizes affect gaming, and what effect does CPU speed have on gaming performance?
Notice the almost linear gain for the 256K L2 cache CPUs, then the almost 6% jump to the 512K version, along with a similar linear gain among the 512K versions. There is not much gain in either type when just going up 200~300MHz, but there is a sizable gain between the two different 2.0GHz versions.
Interesting: while here we do see a nice, steady gain in performance as the CPU gets faster and the L2 cache size doubles, we don't see as big a jump on the graph between the two different (256K vs. 512K L2) 2.0GHz CPUs. But an almost 8% gain just from doubling the L2 cache size is pretty good, especially since there is no real cost difference. Sometimes looking at a graph alone can be misleading; you also need to look at the actual scores, and maybe even calculate the percentages, to really get a good idea of what you are looking at.
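Calculating those percentages is straightforward. Here's a minimal sketch of the math, using made-up placeholder scores (the real numbers are the ones read off the graphs above):

```python
def percent_gain(old: float, new: float) -> float:
    """Percentage improvement of `new` over `old`."""
    return (new - old) / old * 100.0

# Hypothetical frame-rate scores for illustration only --
# substitute the actual benchmark results from the charts.
score_256k = 100.0  # 2.0GHz, 256K L2 (Willamette)
score_512k = 108.0  # 2.0GHz, 512K L2 (Northwood)

gain = percent_gain(score_256k, score_512k)
print(f"L2 cache doubling gain: {gain:.1f}%")  # -> 8.0%
```

An 8% difference that looks small on a bar chart with a non-zero baseline can be quite significant, which is exactly why the raw scores matter.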
Pretty much the same story here: small gains between 200MHz increments (3.7% between the 1.8GHz and 2.0GHz), but a large gain when doubling the L2 cache size (15%).
Although it is not a game, the Video 2000 results are similar: CPU speed and L2 cache size are what affect them. It is interesting to see how little the L2 cache size affects this test, with just a 1.6% difference between the two 2.0GHz CPUs. Could this test be limited more by CPU speed than by memory speed? Or could it be that most of the data sets used in the test fit within the smaller (256K) cache of the Willamette core? The graph also seems to flatten out, so I wonder if there is another limiting factor, such as the video card or memory speed?