Benchmarking RFC

The practice of benchmarking computer systems and components aspires to be a science, yet all too frequently it appears to be an art, and a particularly unreliable one at that. The potential pitfalls of benchmarking are myriad: the urge to declare a ‘winner’, atypical workloads, and poorly disclosed system settings frequently plague reviews. While industry standard benchmarks are a step in the right direction, they hardly provide enough information for a purchasing decision. It is universally accepted that if you are purchasing essential systems or software, for example EDA tools or SAP, you will have vendors conduct performance demonstrations using your specified combination of hardware, operating systems, compilers and software.

This illustrates one of the fundamental problems in benchmarking: reviewers often try to extrapolate general performance rules from a very limited set of observations, and in the process manage to lose all the useful information they might have been able to provide. Ultimately, performance means different things to different people; to a physicist, it might mean sustained double-precision FLOPS, but an animator is probably only interested in rendering times for his models and scenes. In each case, performance is defined by the unique combination of applications and requirements that each person places on their systems.

At Real World Technologies, we are redesigning our benchmarking and review process. In doing so, we would like to give our members, readers and everyone in the community the opportunity to have their application benchmarked and profiled. It is your turn to give us feedback and tell us what you would like to see benchmarked, profiled and analyzed. Send an email to reviews@realworldtech.com and let us know what applications we should be benchmarking, and where we can get them. We promise to seriously consider each suggestion and to try our best to include it in our reviews.

