Steve Denegri, author of The Data Center’s Green Direction Is A Dead End, turned me onto an interesting Microsoft blog post. Titled Changing Data Center Behavior Based On Chargeback Metrics, the post breaks down data center costs at Microsoft.
The author, Christian Belady, professional engineer and principal power and cooling architect, discovered that over 80% of the data center costs scale with power consumption and less than 10% scale with space. Why, he asked, do data centers charge for space and not for power?
Power use is the cost issue
He reports that since Microsoft started charging for wattage there have been a number of important changes. He writes:
From our perspective, our charging model is now more closely aligned with our costs. By getting our customers to consider the power that they use rather than space, then power efficiency becomes their guiding light. This new charging model has already resulted in the following changes:
- Optimizing the data center design:
  - Implement best practices to increase power efficiency.
  - Adopt newer, more power-efficient technologies.
  - Optimize code for reduced load on hard disks and processors.
  - Engineer the data center to reduce power consumption.
- Sizing equipment correctly:
  - Drive to eliminate stranded compute by:
    - Increasing utilization using virtualization and power management technologies.
    - Selecting servers based on application throughput per watt.
    - Right-sizing the number of processor cores and memory chips for the application needs.
  - Drive to eliminate stranded power and cooling: ensure that the total capacity of the data center is used. Another name for this is data center utilization, and it means that you had better be using all of your power capacity before you build your next data center. Otherwise, why did you have the extra power or cooling capacity in the first place? These are all costs you didn’t need.
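The chargeback shift and the utilization point can be sketched in a few lines. This is purely illustrative: the function names, the cost per watt, and the capacity figures are all invented for the example, not taken from Microsoft's model.

```python
# Toy sketch of power-based chargeback and "stranded" capacity.
# All names and numbers here are assumptions for illustration only.

def power_chargeback(watts_used: float, cost_per_watt_month: float) -> float:
    """Monthly charge proportional to power drawn, not floor space occupied."""
    return watts_used * cost_per_watt_month

def utilization(watts_drawn: float, watts_provisioned: float) -> float:
    """Data center utilization: the fraction of built power capacity in use.
    The remainder is stranded power you paid to build but aren't using."""
    return watts_drawn / watts_provisioned

# A 500 W server at a hypothetical $0.15 per watt-month costs $75/month,
# regardless of how many rack units it occupies.
print(power_chargeback(500, 0.15))          # 75.0

# A facility built for 10 MW but drawing only 6 MW has 40% of its
# capacity stranded.
print(utilization(6_000_000, 10_000_000))   # 0.6
```

The point of the example is that under space-based billing the 500 W server and a 100 W server in the same rack footprint would cost the tenant the same, which is exactly the misalignment Belady describes.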
Christian goes on to quote James Hamilton, a Microsoft architect, whose study (PowerPoint here) has convinced him that a power saving of nearly 4x is both possible and affordable using only current technology.
The StorageMojo take
That data centers can be almost 4x more power efficient should surprise no one. Power efficiency has never been a criterion, so they should be grossly inefficient.
The same mindset that justifies pricing software add-ons at their business value rather than a reasonable profit margin is designing data centers. Plenty of power, plenty of cooling, gold-plated backup power because the costs of being down are so high.
It is easy to see how such an attitude creates such wasteful infrastructures and workloads. The question is: will power costs create a significant incentive for change?
My guess is no. The vast majority of computer users, consumer and small to medium enterprises alike, simply do not see power consumption as a significant buying criterion. This is the downside of the consumerization of IT.
Power efficient IT infrastructures will only come with cost effective stepwise enhancements. Component efficiency enhancements that do not cost extra will be successful. Big engineering programs to build energy-efficient servers will not. If you have fewer than 25 servers, as most IT sites do, power demand is simply not a very large part of your costs.
Companies that sell servers should take the issue seriously. The biggest buyers of servers will care about power, even if they are a minority of the units shipped. But significant power savings will require national standards.
Detroit knew that gas prices wouldn’t remain low forever, but in the absence of higher fuel economy standards they went for the easy money. Are server vendors any different?
Comments welcome, of course. Does your datacenter charge per square foot or meter, or by wattage?
My position gives me a view of so many vendor roadmaps that sometimes I want to stab my eyes out. Power utilization reductions are on the product roadmaps of every major vendor in computing and storage. One would think that these manufacturers would not build products if they did not think there was demand.
What’s happening at a minimum, in many large companies anyway, is that the folks that want and need servers are finding server room after server room out of either power or cooling budget. This is bringing power and cooling to the eye of engineering, albeit through an indirect route.
The real question should be: with all this “green” going on, will we use less power? I doubt it very much. I think we’ll just squeeze more in there.
Consider virtualization, the IT trend du jour. Virtual servers spin up like there’s no tomorrow. You fulfill far more need than you did before, but you spend the same and push that much more in there. To some degree or another, anyway.
I’ve made this very point to our data centre people: when they are talking about space, they are really talking about power. Our data centre people, being very traditional, talk about space in terms of racks. Project X needs Y racks. However, a standard rack is not actually defined in our centres by its height. Racks are actually defined in terms of power consumption – at least in terms of averages. So a 4 kW server might physically occupy only 25% of a rack, but it could count as a full rack at some centres.

Of course, in placement terms there are some parts of the data centre which have better airflow, and some individual servers exceed the site’s rating for one rack, so the actual placements may vary. However, for a given server type we have a mapping to a standardised rack requirement based on power. In effect the space allocations are based on the power rating of the server (using averages – we don’t measure actual consumption by an individual server). In effect this is charging back for space requirements by power usage.
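The commenter's mapping from server power to "standardised racks" can be sketched as follows. The 4 kW per-rack budget and the function names are assumptions for illustration; the commenter says each site sets its own rating.

```python
# Sketch of allocating rack space by power draw rather than physical height.
# The per-rack budget below is an assumed figure, not the commenter's actual
# site rating.

RACK_POWER_BUDGET_W = 4000  # assumed average power rating of one rack

def racks_required(server_avg_watts: float, units: int = 1) -> float:
    """Standardised rack requirement based on average power, not U height.
    A server drawing a full rack's power budget 'fills' the rack even if
    it physically occupies only a quarter of it."""
    return (server_avg_watts * units) / RACK_POWER_BUDGET_W

print(racks_required(4000))     # 1.0 - one hot server consumes a full rack
print(racks_required(500, 16))  # 2.0 - sixteen 500 W servers need two racks
```

Charging per rack under this mapping is, as the commenter says, power-based chargeback wearing a space-based label.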
For large organisations, power usage (and cooling costs) are extremely visible. Not just the electricity bill itself, but the horrendous costs of upgrading data centre space. Most data centres I know have no problem with space (although it’s often expressed that way). What they have a problem with is power and cooling – in many cases, at least in the UK, there are practical limits to the amount of power that the supply companies are willing to provide, at least without horrendous costs.
So this issue of power consumption is a massive issue in companies with large data centres. As for smaller organisations, then they will gradually learn. All the major IT equipment vendors that I know of have power consumption near the top of their issue list. It may only be clearly visible to operators of large data centres, but it will gradually dawn on all companies that the price of powering other IT equipment (PCs, monitors, local servers, comms equipment etc.) is still there, even if it can’t easily be distinguished from office lighting. It’s easy enough to do the mathematics on this.
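The "easy mathematics" the commenter alludes to is a one-line estimate. All figures below (fleet size, wattage, tariff) are invented for the example.

```python
# Back-of-envelope cost of powering office IT kit continuously.
# Assumed figures: 150 W per device, £0.10/kWh, 8760 hours per year.

def annual_power_cost(watts: float, price_per_kwh: float,
                      hours: float = 8760) -> float:
    """Yearly electricity cost of one device drawing `watts` continuously."""
    return watts / 1000 * hours * price_per_kwh

# One always-on 150 W desktop at £0.10/kWh: about £131/year.
# A fleet of 100 of them: about £13,140/year, buried in the facilities bill.
print(round(annual_power_cost(150, 0.10), 2))
print(round(annual_power_cost(150, 0.10) * 100, 2))
```

Which is the commenter's point: the cost is real and easy to compute, it just isn't broken out from office lighting.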
There will be legislation though – in the EU there will inevitably be standards on things like standby power consumption for consumer equipment (and cars). However, it’s a bit more difficult to legislate for what the proper power consumption ought to be for a given amount of storage. Frankly the market will eventually drive this – maybe late, but it will. The west is now suffering the consequences of overly cheap power (as it is also suffering the consequences of overly cheap borrowing). There will be a market-driven correction, and it will feed through to engineering priorities on IT equipment.
Dead on, Robin! As users start targeting the 4x savings you referenced, many in the industry wonder if this won’t cause those users to simply deploy more infrastructure to fill that savings gap, offsetting it completely.
Thanks for highlighting this very interesting development on StorageMojo, and I’ll be curious to see posts that shed light on just how many plan to price data center services along power rather than square footage metrics.
So many organizations don’t charge at all because they don’t have the information about inventory, usage and owners. Sloppy accounting and management lead to sloppy data center management. Don’t assume bad decisions are being made with good data – assume bad decisions are bad because of a complete lack of useful data.
I for one find it funny that Microsoft is putting out this paper.
Why should small companies care? Their costs are different, because with a small number of servers, the costs simply aren’t the same – spread the servers throughout the office and use air cooling. To give a real world example, three or four servers in a 15000 sq ft building (in three locations) doesn’t add appreciably to the electrical or cooling loads. The air compressor uses a lot more power.
And, yes, I think Robin is correct.
Spot on! The brains of IT have always focused on space because it is a far easier metric to use and “bill back”. The two key areas where real power savings will come are virtualizing servers effectively and moving data.
Storage holds the real key, as most data centers already understand the benefits of server virtualization. If I utilize real thin provisioning and automated tiered ILM, so that only my active data sits on 15K spindles while the rest of my data sits on 5K SATA, I am talking real money here. We all know that 80% of our data is inactive, but how do I sort it out?
We now have our customers developing a pool of FC disks to handle I/O for the active data, and then large “cans” (1 TB SATA) to handle all the inactive data. The software moves data up and down based on usage – and this is done in 2 MB blocks.
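The usage-based movement described above can be sketched as a toy tiering policy: track access counts per 2 MB block and place hot blocks on fast disk, cold blocks on SATA. The threshold, tier names, and block IDs are invented; real arrays use far more elaborate heat statistics than a single counter.

```python
# Toy block-level tiering policy. Threshold and tier names are assumptions.

BLOCK_SIZE = 2 * 1024 * 1024  # 2 MB migration granularity, per the comment
HOT_THRESHOLD = 10            # assumed accesses/period to count as "active"

def assign_tier(access_counts: dict[int, int]) -> dict[int, str]:
    """Map each block ID to a storage tier by recent access frequency."""
    return {
        block: ("fc_15k" if count >= HOT_THRESHOLD else "sata")
        for block, count in access_counts.items()
    }

# With ~80% of data inactive, most blocks land on cheap SATA.
tiers = assign_tier({0: 250, 1: 3, 2: 0, 3: 41, 4: 1})
print(tiers)  # {0: 'fc_15k', 1: 'sata', 2: 'sata', 3: 'fc_15k', 4: 'sata'}
```

The savings claim follows directly: only the minority of blocks above the threshold pay for 15K spindles.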
The savings are incredible – and simple to implement.
I love technology 🙂