Can Anything Be Done?
Most people believe they understand benchmark numbers, and usually have their favorite publication’s results at their fingertips as proof that component A is better than component B. The sad fact is that there are far too many factors that the average person doesn’t take into consideration, and that the publications in question often don’t even report. As a general rule, those performing these benchmark tests are not computer professionals, but simply people who have the time and motivation to run a bunch of tests. I have to wonder whether most of the popular publications take the time to optimize the performance of each system they test before doing their benchmark comparisons.
The recent controversy regarding the performance of various DDR chipsets for the Athlon is a case in point, and seems to validate my concerns. Different publications tested the same MSI motherboard and came up with different results. Investigation revealed that some were using a pre-production BIOS while others were using the current production BIOS. Even when the same BIOS was used, there were differences… which turned out to be caused by different motherboard revisions whose circuit designs produced slightly different memory timings. Even after all these issues were taken into consideration, there is still disagreement on the relative performance of the competing chipsets.
In my opinion, part of the problem is that many of the people performing these benchmarks are not true industry professionals, and therefore don’t know how to properly test systems and components. However, they are encouraged to do what they do by product marketing groups and end users who either don’t care about doing it right, or simply don’t understand what doing it right means. It is unlikely that this situation will change anytime soon, and it may get worse, because there is money involved, and those making that money (product manufacturers and publications) are not going to change without a good reason to do so. Industry professionals generally have the incentive of knowing that if they make the wrong recommendation, their job could be at stake. This is not the case for most publications, so there is no motivation to improve their process unless the readers demand it. Because most readers assume that authors are experts in the subjects they write about, there is generally little negative feedback, and many of those who do provide feedback are as uninformed as the authors themselves.
There are those who do know how to benchmark properly, of course, and there are ways that people can learn how to do so. For example, BAPCo provides education and training on the use of their benchmarks (for a fee), which few publications I am aware of take advantage of. Of course, if the manufacturer provides feedback on the process, readers assume that the results will be skewed unfairly. So how do we resolve this dilemma? It certainly isn’t easy. The readers are ultimately responsible for giving respectability to various publications, but does the burden really lie with them? That would mean readers becoming as familiar with the process as the authors of the articles, which raises the question of who is educating whom. On the other hand, industry professionals and publications who know better need to speak up, let users, manufacturers and authors know that the misinformation will not be ignored, and create a financial incentive to improve. Whether one likes it or not, money has the power to force companies and individuals to modify their behavior, and until it becomes financially beneficial to change, most publications will continue to mislead and misinform with their benchmarks, and manufacturers will continue to support and encourage it.