You say you want a revolution?
Energy efficient data centers are in the news again, with the EPA reporting that data centers use 1.5% of US electricity – almost 6 million homes’ worth – a figure on track to double in five years.
The numbers don’t include the “custom server” power usage of Google, which I estimate will be about 250 MW next year. If Google keeps selling ads, and people keep using Google, I expect that number will grow by ~50 MW a year for the next several years.
Measuring & comparing power use
Forget about global warming: that is a lot of power. Expensive power. Can we cut the power requirement? We could, if we had a way to reliably benchmark power consumption across architectures. Which is what JouleSort: A Balanced Energy-Efficiency Benchmark (PDF) by Suzanne Rivoire, Mehul A. Shah, Parthasarathy Ranganathan and Christos Kozyrakis tries to do. Thanks to alert reader Wes Felter for bringing it to my attention.
As noted in Powering a warehouse-sized computer, system power requirements are overstated. Actual power draw varies with workload. So how do we compare?
The benchmark of the future
The authors chose a sort algorithm:
We choose sort as the workload for the same basic reason that the Terabyte Sort, MinuteSort, PennySort, and Performance-price Sort benchmarks do: it is simple to state and balances system component use. Sort stresses all core components of a system: memory, CPU, and I/O. Sort also exercises the OS and filesystem. Sort is a portable workload; it is applicable to a variety of systems from mobile devices to large server configurations. Another natural reason for choosing sort is that it represents sequential I/O tasks in data management workloads.
JouleSort is an I/O-centric benchmark that measures the energy efficiency of systems at peak use. Like previous sort benchmarks, one of its goals is to gauge the end-to-end effectiveness of improvements in system components. To do so, JouleSort allows us to compare the energy efficiencies of a variety of disparate system configurations. Because of the simplicity and portability of sort, previous sort benchmarks have been technology trend bellwethers, for example, foreshadowing the transition from supercomputers to clusters. Similarly, an important purpose of JouleSort is to chart past trends and gain insight into future trends in energy efficiency.
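The benchmark’s figure of merit is useful work per unit of energy – records sorted per Joule, where Joules are average wall-socket power times elapsed time. A minimal sketch of the bookkeeping, with hypothetical numbers (the 1-billion-record run, 300 W draw and 1,000-second time are all assumed for illustration):

```python
def records_per_joule(records_sorted, avg_power_watts, elapsed_seconds):
    """JouleSort-style metric: useful work per unit of energy.
    Energy (Joules) = average power (Watts) * elapsed time (seconds)."""
    return records_sorted / (avg_power_watts * elapsed_seconds)

# Hypothetical run: 1 billion 100-byte records sorted in 1,000 seconds
# at an average wall-socket draw of 300 W (all figures assumed).
eff = records_per_joule(1e9, 300.0, 1000.0)  # ~3,333 records/Joule
```

Note that either a faster sort or a lower average draw raises the score, which is what lets the benchmark compare a laptop against a server rack on equal terms.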
The authors focused on 2 things at odds with the Google power paper. They chose a benchmark that exercises all components of the system, while the Googlers concluded that CPU utilization was the key variable. They also focused on power consumption at peak workload, rather than Google’s strategy of throttling back at peak loads while reducing idle consumption.
The differences may be a question of the problems each examined. Google has well-defined workloads with strong time-of-day dependencies. The Stanford/HP Labs team defined a database sequential access workload. My guess is that both approaches are valid for understanding parts of the energy problem and that neither is sufficient for a complete picture.
Update: Another major difference is that the JouleSort team looked at individual servers, while Google’s paper focused on the efficiency of racks, PDU groups and multi-thousand node clusters.
Prototyping an energy efficient server
Any benchmark is a compromise. Much of the paper presents the authors’ rationale for their choices, which I trust will be hashed out by people competent to debate them. I could see how some different choices might change the results, but the authors made reasonable choices.
They used the benchmark to evaluate several systems: some “unbalanced,” such as a laptop they had in the lab, and some “balanced,” i.e. configured to meet the needs of the benchmark most efficiently.
They found that CPU utilization on the unbalanced systems was quite low, ranging from 1% to 26%. As a result, those systems didn’t accomplish much work for the power they consumed.
Since the CPU is usually the highest-power component, these results suggest that building a system with more I/O to complement the available processing capacity should provide better energy efficiency.
Ah, the irony! 40 years after the minicomputer we are back to a batch mainframe I/O-centric architecture. All things old are new again.
Design for efficiency
Storistas will discern that disks and bandwidth are critical to efficiency in this benchmark. To keep the CPU busy requires lots of bandwidth and I/O. At 15 W each, it doesn’t take many enterprise disks to overtake the CPU as the major power sink. The balanced system required 2 trays of 6 disks each to keep a dual-core CPU busy.
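The arithmetic is stark. The 15 W enterprise-disk figure comes from the discussion above; pairing it with an assumed 80 W dual-core CPU TDP (my number, for illustration) shows how quickly storage dominates the power budget:

```python
# Back-of-envelope power budget. The 15 W enterprise-disk figure is
# from the text; the 80 W CPU TDP is an assumption for illustration.
CPU_WATTS = 80
DISK_WATTS = 15
DISKS = 12          # 2 trays of 6 disks each

disk_power = DISKS * DISK_WATTS        # 180 W
total_power = CPU_WATTS + disk_power   # 260 W
disk_share = disk_power / total_power  # disks draw ~69% of this budget
```

At roughly two-thirds of the budget, the disks – not the CPU – are where the efficiency battle gets won or lost in an I/O-heavy configuration like this one.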
Here’s the configuration of a balanced server; note the disk components.
A really efficient server
The team then built a server optimized for the benchmark. The configuration:
Note the consumer CPU and the notebook disk drives. The controller types are more an artifact of the limited choices in motherboards that support mobile chips and lots of disks. Power may indeed be the factor that tips the industry to 2.5″ drives. The power savings are immense over the fast 3.5″ drives.
File systems, RAM and power supplies
The authors looked at some other issues as well.
The benchmark is a sequential sort, and the authors found that file systems with higher sequential access rates were more efficient. Developers, the time may not be too far away when your code is measured on power efficiency.
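Why sequential access matters is easiest to see in the classic external merge sort pattern that sort benchmarks exercise: both phases read and write data in long sequential streams, so a file system with better sequential throughput directly raises the records-per-Joule score. A toy in-memory sketch of the two phases (an illustration of the pattern, not the paper’s implementation):

```python
import heapq

def external_sort(records, run_size):
    """Toy external merge sort, the I/O pattern behind sort benchmarks:
    phase 1 writes sorted runs (sequential writes), phase 2 streams a
    k-way merge over them (sequential reads). 'Disk' is simulated with
    in-memory lists to keep the sketch self-contained."""
    runs, buf = [], []
    for rec in records:                 # phase 1: build sorted runs
        buf.append(rec)
        if len(buf) == run_size:
            runs.append(sorted(buf))
            buf = []
    if buf:
        runs.append(sorted(buf))
    return list(heapq.merge(*runs))     # phase 2: merge the sorted runs

external_sort([5, 1, 4, 2, 3], 2)  # -> [1, 2, 3, 4, 5]
```

Neither phase ever seeks back into data it has already passed, which is exactly the access pattern that rewards high sequential bandwidth over fast random I/O.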
They also found that reducing the RAM footprint to the needed capacity raised efficiency as well.
They also found that the winning system could use a much smaller power supply and that at loads below 68% the original and replacement power supplies were about equally efficient. The bigger issue is the cost of over-provisioning data center power. The authors suggest that power-factor-corrected power supplies are required to make energy-efficient servers economical as well.
The StorageMojo take
As the breadth of the paper suggests, power efficiency requires a holistic understanding of compute, I/O, software, power factors and configuration trade-offs. Some of the supercomputer folks can do this, but the average data center is years away from this level of workload understanding.
Instead, research should point to a few things that increase efficiency and reduce consumption across a wide range of workloads and configurations. Mobile CPUs and notebook disks are 2 likely candidates. Software will prove significant as well, because widely used software affects so many systems.
We should also remember other areas of power waste. I’m an astronomy buff and the amount of energy used across the US to illuminate empty parking lots and untraveled streets is immense and light-polluting. There are many ways we can become more power efficient. Data centers are just one important component in our connected world.
Comments welcome, as always. And no more blogging about blogging: my traffic dropped by almost 20% last week. For StorageMojo, quality content beats controversy hands down. I couldn’t be more pleased.