After spending some time with this benchmark, I have a feeling it could provide valuable information for those who want to look ‘under the covers’ and determine exactly where performance improvements are coming from in different systems and components. I’m not sure of its value as a real-world performance measurement tool, like Winstone or SysMark, but determining that will require additional testing and analysis.
I mentioned that the HDD and video memory tests were fairly consistent between platforms because I used the same video card and hard drive in all tests. This made those results fairly uninteresting, except to note that they are, in fact, consistent. I did not run any of the available video performance or video quality tests, simply because the video card I used is not intended to do well on 3D graphics, which is what those tests measure. One final set of tests that may be worth evaluating are the ‘Crunch’ tests, which are essentially a CPU, a memory, and a video performance test all run at the same time to measure performance when the system is under a ‘full load’. I did record those results, but have not yet taken the time to analyze them. These tasks will be left to a more in-depth follow-up of this benchmark with a wider variety of components.
My impression is that this is a nice addition to, and perhaps a replacement for, the many synthetic tests that reviewers use. It would be interesting to see how the results of this benchmark compare to SiSoft Sandra, and to some of the memory/cache benchmarks that have been used, such as cachemem, STREAM, and Linpack. Since eTesting Labs has decided not to update the WinBench tests, I am hoping this one will turn out to fill that void, as I do not believe any others reliably do so today. Again, this will have to wait for either a later time or another reviewer to examine.
The one part of this benchmark that concerns me is the CPU tests. Given how heavily CPU performance depends on how data is accessed from memory, I am not comfortable with the idea of isolating CPU performance from memory accesses, as is attempted here. This is not so much a concern about whether anyone will see the same results in a real application, but whether these focused tests can really measure the ‘true’ performance benefit of features specifically implemented to optimize common memory access patterns. One thought I have is that using both SPEC CPU2000 and PCMark2002 might provide a view from both perspectives and give a more accurate picture; but this too will have to wait until I have completed more thorough testing.