Power is probably the least understood and most widely used technology in computing. We don't understand it (quick: what is a ground loop?), we rarely measure it, and its behavior is a mystery. Labels are no help either. My computer's spec reads "100-240 V alternating current, 12 A (low voltage range) or 6 A (high voltage range), 50-60 Hz".

Does it really use almost as much power as a hair dryer? I don’t think so.

Google power
Beginning 5 years ago, Google took the lead in making power consumption an IT vendor issue. Today, Intel's power-hungry NetBurst architecture is history and power-efficient multi-core architectures are all the rage. Google wasn't the only factor, but their use of free software, cheap hardware and massive scale meant that energy consumption became one of the few places they could cut costs.

The fact that they are purchasing over a half million servers a year didn’t hurt either.

Using the data-intensive methodologies their scale enables, Google has now published the results of their studies of data center power.

Googlers take a long hard look at power
What are the key determinants of data center power consumption? How can data centers maximize the return on their big investments? Which technologies, if any, might help data centers become more efficient? Google now gives us one large-scale data point.

Power Provisioning for a Warehouse-sized Computer by Xiaobo Fan, Wolf-Dietrich Weber and Luiz André Barroso sheds light on creating energy-efficient data centers and on Google's own operations. Anil Gupta over at the Network Storage blog turned me on to the paper – thanks, Anil – a credit I spaced on when I first wrote this.

Power to the servers
In the usual massive-scale Google style, the paper looks at the power consumption of groups of up to 15,000 servers. That’s a tiny fraction of the server population of a recent Google data center, but large enough to be useful for the rest of us with less exalted infrastructures.

A datacenter costs $10-$20 per deployed watt of peak computer power, excluding cooling and other loads. Ideally you'd build it to the load you need and then run it at close to 99% of that capacity all the time. The problem is that equipment power ratings are pretty useless for determining actual peak load.

This leads to a couple of problems. The cost of the datacenter itself actually exceeds the cost of the power it delivers over 10 years of operation. I ran the numbers for power costs at the new Oregon facility and they work out to about 50 cents per watt per year, or $5 per watt over 10 years – well under the $10-$20 per watt the facility costs to build. So Google maximizes its investment by keeping power consumption pegged near the facility's capacity.
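A quick back-of-the-envelope sketch of that arithmetic. The electricity rate is my assumption – roughly what the 50-cents-per-watt-year figure implies – not a number from Google or the paper:

    # Per-watt economics of a datacenter, rough numbers only.
    # The $/kWh rate is an assumption, not a figure from Google or the paper.
    build_cost_per_watt = 15.00        # midpoint of the $10-$20/deployed-watt range
    electricity_rate = 0.057           # $/kWh, assumed cheap-hydro-grade power
    hours_per_year = 8760

    kwh_per_watt_year = (1 / 1000) * hours_per_year                  # 8.76 kWh
    power_cost_per_watt_year = kwh_per_watt_year * electricity_rate  # ~$0.50
    power_cost_per_watt_decade = power_cost_per_watt_year * 10       # ~$5.00

    print(f"power, 10 years: ${power_cost_per_watt_decade:.2f} per watt")
    print(f"facility build:  ${build_cost_per_watt:.2f} per watt")
    # The facility costs 2-4x the decade of electricity it delivers, which is
    # why leaving provisioned watts idle is so expensive.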

Not only that, but Google is billed on its peak hourly power draw – at least in Oregon. If they run one hour at 100 MW and the rest of the month at 25 MW, they get charged as if they had drawn 100 MW for the entire month. That's a little different from how we home users pay.
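A minimal sketch of how that demand-style billing plays out, using the load profile described above; no actual tariff numbers are involved:

    # Peak-demand billing: the whole month is billed at the highest hourly draw.
    # Load profile mirrors the 100 MW / 25 MW example above.
    hours_in_month = 30 * 24
    hourly_load_mw = [25.0] * hours_in_month   # a steady 25 MW...
    hourly_load_mw[0] = 100.0                  # ...except one hour spiking to 100 MW

    billed_mw = max(hourly_load_mw)            # the 100 MW hour sets the bill
    average_mw = sum(hourly_load_mw) / hours_in_month

    print(f"billed as: {billed_mw:.0f} MW, actual average: {average_mw:.1f} MW")
    # One bad hour means paying for roughly 4x the power actually consumed.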

Therefore it makes sense for them to utilize the data center’s capacity as fully as possible, and as uniformly as possible, so they don’t overbuild the datacenter or overpay for the power. They really need to understand power consumption.

So what did they figure out?
Well, a whole heck of a lot. Here are some key findings:

  • The gap between aggregate and spec power can be as great as 40% for a datacenter, though Google's applications are better behaved (see the sketch after this list)
  • Dynamic power management is most useful for preventing overloads
  • Power management is more effective at the datacenter level than at the rack level
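
To put a number on that first finding (the 1 MW budget and the 60% figure below are my illustrative values, not measurements from the paper): if a facility is provisioned from summed nameplate ratings but the measured aggregate peak comes in 40% lower, that gap is capacity the provisioning process has stranded.

    # Illustrative arithmetic for the 40% gap between spec (nameplate) power
    # and measured aggregate peak. The numbers are made-up examples.
    summed_nameplate_w = 1_000_000        # 1 MW of provisioned critical power
    measured_aggregate_peak_w = 600_000   # the 40%-gap worst case cited above

    stranded_headroom_w = summed_nameplate_w - measured_aggregate_peak_w
    print(f"headroom never used: {stranded_headroom_w / 1000:.0f} kW "
          f"({stranded_headroom_w / summed_nameplate_w:.0%} of the budget)")
    # Provisioning to nameplate ratings leaves that 400 kW stranded:
    # power capacity paid for but never drawn.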

Stop back soon for part 2 of Powering a warehouse-sized computer