Power is probably the least understood and most widely used technology in computing. We don't understand it – quick, what is a ground loop? – we rarely measure it, and its behavior is a mystery. Labels are no help either. My computer's spec is "100-240 V AC, 12 A (low voltage range) or 6 A (high voltage range), 50-60 Hz".
Does it really use almost as much power as a hair dryer? I don’t think so.
Google power
Beginning 5 years ago, Google took the lead in making power consumption an IT vendor issue. Today, Intel's power-hungry NetBurst architecture is history and power-efficient multi-core architectures are all the rage. Google wasn't the only factor, but their use of free software, cheap hardware and massive scale meant that energy consumption became one of the few places they could cut costs.
The fact that they are purchasing over a half million servers a year didn’t hurt either.
Using the data-intensive methodologies their scale enables, Google has now published results of their studies of data center power.
Googlers take a long hard look at power
What are the key determinants of data center power consumption? How can data centers maximize the return on their big investments? What, if any, technologies may help data centers become more efficient? Google now gives us one large-scale data point.
Power Provisioning for a Warehouse-sized Computer by Xiaobo Fan, Wolf-Dietrich Weber and Luiz André Barroso sheds light on creating energy-efficient data centers and on Google's operations. Anil Gupta over at the Network Storage blog turned me on to this paper – thanks, Anil – a fact I spaced on when I first wrote this.
Power to the servers
In the usual massive-scale Google style, the paper looks at the power consumption of groups of up to 15,000 servers. That’s a tiny fraction of the server population of a recent Google data center, but large enough to be useful for the rest of us with less exalted infrastructures.
A datacenter costs $10-$20 per deployed watt of peak computer power, excluding cooling and other loads. Ideally you'd build just enough capacity and then run the data center at 99% of it all the time. The problem is that equipment power ratings are pretty useless for determining actual peak load.
This leads to a couple of problems. The cost of the datacenter actually exceeds the cost of power for 10 years of operation. I ran the numbers for the power costs at the new Oregon facility and it works out to about 50 cents per watt per year, or $5 per watt for 10 years. So it maximizes Google's investment to keep their power consumption pegged at the facility's capacity.
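A quick sanity check on that figure: one watt drawn continuously for a year is 8.76 kWh, so $0.50 per watt-year implies an electricity rate of roughly 5-6 cents per kWh, which sounds plausible for Oregon hydro power. Here's a rough sketch of the arithmetic; the rate is my assumption, chosen to match the figure above, not a published number:

```python
# Rough check of the cost-per-deployed-watt argument.
# The electricity rate is an assumed value (~5.7 cents/kWh), picked only
# to match the ~$0.50 per watt-year figure above -- not a published rate.

HOURS_PER_YEAR = 8760
rate_per_kwh = 0.057            # assumed $/kWh
build_cost_per_watt = (10, 20)  # $/W of peak capacity, per the paper

kwh_per_watt_year = HOURS_PER_YEAR / 1000.0          # 8.76 kWh
power_cost_per_watt_year = kwh_per_watt_year * rate_per_kwh
ten_year_power_cost = 10 * power_cost_per_watt_year

print(f"Power: ${power_cost_per_watt_year:.2f}/W-year, "
      f"${ten_year_power_cost:.2f}/W over 10 years")
print(f"Build: ${build_cost_per_watt[0]}-${build_cost_per_watt[1]}/W")
# -> roughly $0.50 per watt-year and ~$5 per watt over 10 years,
#    well under the $10-$20 per watt it costs to build the datacenter.
```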
Not only that, but Google gets charged based on its peak power draw – at least in Oregon. If they run one hour at 100 MW and the rest of the month at 25 MW, they get billed as if they had drawn 100 MW for the whole month. That's a little different than us home users.
Therefore it makes sense for them to utilize the data center’s capacity as fully as possible, and as uniformly as possible, so they don’t overbuild the datacenter or overpay for the power. They really need to understand power consumption.
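To make the peak-demand billing point concrete, here's a toy comparison of a spiky load against a flat one; the demand and energy rates are invented for illustration, not Oregon's actual tariff:

```python
# Toy illustration of peak-demand billing (all rates are invented).
# One hour at 100 MW triggers the same monthly demand charge as
# running at 100 MW all month, even if the average draw is much lower.

HOURS_PER_MONTH = 730
demand_rate = 10.0   # assumed $/kW of monthly peak demand
energy_rate = 0.05   # assumed $/kWh of energy actually consumed

def monthly_bill(peak_mw, avg_mw):
    demand_charge = peak_mw * 1000 * demand_rate
    energy_charge = avg_mw * 1000 * HOURS_PER_MONTH * energy_rate
    return demand_charge + energy_charge

spiky = monthly_bill(peak_mw=100, avg_mw=25)  # brief 100 MW spike, ~25 MW average
flat  = monthly_bill(peak_mw=25,  avg_mw=25)  # same energy, no spike
print(f"spiky load: ${spiky:,.0f}   flat load: ${flat:,.0f}")
# The spike quadruples the demand charge while the energy charge stays
# the same -- which is why uniform utilization pays off.
```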
So what did they figure out?
Well, a whole heck of a lot. Here are some key findings.
- The gap between actual aggregate peak power and the summed spec (nameplate) power can be as great as 40% for a datacenter, though Google's applications are better behaved (see the sketch after this list)
- Dynamic power management is most useful for preventing overloads
- Power management is more effective at the datacenter level than at the rack level
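That 40% gap is what makes oversubscription possible: if the machines behind a power budget never all peak at once, you can safely host more of them than the nameplate ratings suggest, and lean on dynamic power management for the rare moments when aggregate demand approaches the limit. A minimal sketch of the arithmetic, with invented numbers rather than anything from the paper:

```python
# Back-of-the-envelope oversubscription estimate (numbers are invented).
# If the aggregate peak ever observed is only 60% of the summed nameplate
# power (the 40% gap above), the same budget hosts ~1.6x the machines.

nameplate_watts = 300          # spec/label rating per server (assumed)
measured_peak_fraction = 0.60  # aggregate peak as a fraction of nameplate
budget_watts = 1_000_000       # 1 MW of provisioned datacenter power

naive_servers = budget_watts // nameplate_watts
realistic_peak_watts = nameplate_watts * measured_peak_fraction
oversubscribed_servers = int(budget_watts // realistic_peak_watts)

print(f"sized by nameplate:     {naive_servers} servers")
print(f"sized by measured peak: {oversubscribed_servers} servers")
# Dynamic power management (the second finding) is the safety net for
# the rare occasions when aggregate demand actually nears the budget.
```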
Stop back soon for part 2 of Powering a warehouse-sized computer
Robin,
Ten days ago, I also covered this very paper (it can be downloaded from http://labs.google.com/papers/power_provisioning.pdf) on my blog at http://andirog.blogspot.com/2007/07/power-consumption-of-google-services.html. Barroso previously published a paper in ACM Queue discussing how the cost of powering a computing device can exceed the cost of the hardware. Google has also proposed a new spec for server power supplies that are 90% efficient, versus the 60-70% efficiency of current ones.
As you wrote before, treating its infrastructure as a core operation is one of Google's competitive advantages over its competitors. Google also muddies the "build vs. buy" and "focus on your core business, let someone else worry about everything else" debate. Doesn't it?
Anil
I wonder why datacenters are not charged by the kWh like the rest of the world? I wonder if it's to keep them from overloading the grid with spikes of usage…
Hi Robin,
The study mentioned 'power intensive' benchmarks… does this refer to the SPEC power benchmarks currently under development, or to some other benchmarks?
Any paper or study on power-intensive benchmarks?
Open Systems Guy: I would imagine so. Clearly, the power companies have already worked out protocols so that the demands of electric steel furnaces (used for melting scrap in the now wildly successful "mini-mills", but also used long before them) are handled gracefully by the grid.
And for a plant like that, the cost of provisioning peak power is going to be significant, so charging in terms of it makes a lot of sense, especially since a mini-mill, I would guess, normally runs at near peak power.
Also, mini-mills with continuous rolling systems after the furnace, and data centers (at least the ones that never seem to have quite enough generator backup 🙂), are not going to be first in line to volunteer for interruptible provisioning; you can get better rates if you allow the power company to make you shed load when their total peak gets too high.
– Harold
Cedric,
It isn’t clear what benchmarks they used, but it seems like these are internal Google tests rather than a standard. Update: The paper mentions in passing that
OSGuy and Harold,
With hydro power, the dams have to let a certain amount of water through to keep the fish and fishermen happy. So if they can sell the power that water produces, it is a win. Otherwise it just rolls out to sea and the Bonneville Power Administration doesn't get a nickel for it.
Robin