SPECpower_ssj2008

This is the first time we’ve used SPECpower_ssj2008, and it’s been quite a pleasure. First off, we should note that it was used in research mode – that is, our system setup is not capable of providing valid SPECpower scores. SPECpower requires a separate controller system that drives the system under test (SUT) and interfaces with an extremely high-precision power meter. We instead ran the controller software on the system under test itself – which makes little difference in terms of performance, but is nonetheless not valid. Additionally, we opted for the eminently affordable Watts Up Pro (which retails for around $100), while qualified meters start at around a thousand dollars.

SPECpower_ssj2008 measures performance for server-side Java, much like SPECjbb2005, but the two workloads are not comparable; the scoring works differently and reports power consumption to boot, so it essentially obviates our need for SPECjbb2005 in the future. The software tuning is much the same as for SPECjbb – it is a huge knob and heavily dependent on the JVM. Incidentally, the optimal JVM tuning is the same for both SPECjbb and SPECpower, and we reused our command line options and affinity bindings from SPECjbb as well.

One of the particularly attractive features of SPECpower is that, unlike SPECjbb, it targets specific utilization levels to measure power. We chose the standard set of 11 utilization levels – active idle (where the system can accept transactions, but none are being sent by the client/controller) and every 10% increment up to full utilization. To score SPECpower, the average ssj_ops over all 11 levels is divided by the average power over all 11 levels – the resulting ratio is the performance to power ratio. As with SPECjbb2005, we only took a single SPECpower_ssj2008 measurement, but we had run the benchmark perhaps 3-4 times beforehand to familiarize ourselves with it. The performance results were steady enough that we felt additional runs were unnecessary.
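The scoring described above can be sketched in a few lines of Python. The load/power numbers below are entirely made up and only illustrate the arithmetic of the overall metric, not any measured system:

```python
# Sketch of the SPECpower_ssj2008 overall metric as described above.
# The (target_load, ssj_ops, avg_watts) readings for the 11 levels
# (active idle plus 10% steps) are hypothetical, for illustration only.
levels = [
    (0.0,        0,  65),   # active idle: ready for work, none sent
    (0.1,   30_000,  90),
    (0.2,   60_000, 110),
    (0.3,   90_000, 130),
    (0.4,  120_000, 150),
    (0.5,  150_000, 170),
    (0.6,  180_000, 190),
    (0.7,  210_000, 210),
    (0.8,  240_000, 230),
    (0.9,  270_000, 250),
    (1.0,  300_000, 270),
]

# Average ssj_ops over all 11 levels divided by average power over all
# 11 levels -- numerically the same as sum(ops) / sum(watts).
avg_ops = sum(ops for _, ops, _ in levels) / len(levels)
avg_watts = sum(w for _, _, w in levels) / len(levels)
score = avg_ops / avg_watts  # overall ssj_ops per watt

print(f"overall ssj_ops/watt: {score:.1f}")
```

Note that because both numerator and denominator are divided by the same 11, the ratio of averages equals the ratio of sums, which is how the official metric is usually stated.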
Figure 19 – SPECpower_ssj2008 Performance vs. Power

The figure above shows performance (in ssj_ops) on the X-axis and power consumption on the Y-axis, so the best solution would be in the lower right-hand corner, and the slope of each system’s curve shows the price (in power) of additional performance. It also shows absolute performance quite clearly – something the standard SPECpower charts aren’t as good at conveying. While efficiency is certainly a huge part of the equation for IT staff, absolute performance is just as important. It is easy to improve efficiency by using a processor with lower voltage, frequency and power… but if some workloads now require two systems instead of one, that’s not exactly a gain in efficiency.

Comparing the two trend lines, the difference between the two generations is clear: at almost any comparable performance level, the Nehalem system can get there with 150W less power than the older Harpertown system. In fact, the Nehalem system provides more performance while using less power than the Harpertown does at active idle – quite an impressive accomplishment.
Figure 20 – SPECpower_ssj2008 Performance vs. Power Efficiency

This chart is a variation on the prior one – instead of showing power on the Y-axis, it shows the performance to power ratio that is the primary efficiency metric for SPECpower. Again, it clearly shows both the trade-offs of running a given system at various utilization levels and the advantages and disadvantages of the different systems. Bringing home the point about efficient performance: the Nehalem system running at 20% utilization (a very light load) achieves the same efficiency as the Harpertown system under peak load.