Steve Denegri, author of The Data Center’s Green Direction Is A Dead End, turned me on to an interesting Microsoft blog post. Titled Changing Data Center Behavior Based On Chargeback Metrics, the post breaks down data center costs at Microsoft.

The author, Christian Belady, professional engineer and principal power and cooling architect, discovered that over 80% of the data center costs scale with power consumption and less than 10% scale with space. Why, he asked, do data centers charge for space and not for power?
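
To see why the charging model matters, here is a minimal sketch, in Python and with invented tenant numbers rather than Microsoft’s, of how the same facility bill allocates under space-based versus power-based chargeback:

```python
# Hedged illustration: how one monthly data center bill splits across tenants
# when charged back by floor space versus by metered power.
# All figures below are hypothetical.

monthly_facility_cost = 1_000_000  # total monthly cost to recover, USD (assumed)

# Hypothetical tenants: (name, racks occupied, average kW drawn)
tenants = [
    ("search",  40, 400.0),  # dense, power-hungry racks
    ("archive", 40,  80.0),  # lots of mostly idle disk
    ("web",     20, 120.0),
]

total_racks = sum(racks for _, racks, _ in tenants)
total_kw = sum(kw for _, _, kw in tenants)

print(f"{'tenant':<10}{'by space':>12}{'by power':>12}")
for name, racks, kw in tenants:
    by_space = monthly_facility_cost * racks / total_racks
    by_power = monthly_facility_cost * kw / total_kw
    print(f"{name:<10}{by_space:>12,.0f}{by_power:>12,.0f}")
```

With these made-up numbers, the dense, power-hungry tenant and the mostly idle archive pay the same under space-based chargeback; under power-based chargeback the heavy user pays roughly five times as much, which is exactly the incentive Belady is after.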

Power use is the cost issue
He reports that since Microsoft started charging for wattage, there have been a number of important changes. He writes:

From our perspective, our charging model is now more closely aligned with our costs. By getting our customers to consider the power that they use rather than space, then power efficiency becomes their guiding light. This new charging model has already resulted in the following changes:


  • Optimizing the data center design

    • Implement best practices to increase power efficiency.

    • Adopt newer, more power efficient technologies.

    • Optimize code for reduced load on hard disks and processors.

    • Engineer the data center to reduce power consumption.

  • Sizing equipment correctly

    • Drive to eliminate Stranded Compute by:

      • Increasing utilization by using virtualization and power management technologies.

      • Selecting servers based on application throughput per watt.

      • Right sizing the number of processor cores and memory chips for the application needs.

    • Drive to eliminate stranded power and cooling—ensure that the total capacity of the data center is used. Another name for this is data center utilization and it means that you better be using all of your power capacity before you build your next data center. Otherwise, why did you have the extra power or cooling capacity in the first place…these are all costs you didn’t need.
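
The “application throughput per watt” criterion in the list above is straightforward to apply. A minimal sketch, with invented candidate servers and made-up benchmark numbers:

```python
# Hedged sketch of selecting servers by application throughput per watt.
# The candidate machines and their numbers are invented for illustration.

candidates = [
    # (description, requests/sec the application sustains, measured watts at load)
    ("2-socket, 16 cores, 128 GB", 9000, 450),
    ("1-socket, 8 cores, 64 GB",   5200, 220),
    ("low-power, 8 cores, 32 GB",  3800, 130),
]

for desc, throughput, watts in candidates:
    print(f"{desc:<28}{throughput / watts:6.1f} requests/sec per watt")

# Once the chargeback is per watt rather than per rack, the right pick is the
# highest throughput-per-watt box, not simply the fastest one.
best = max(candidates, key=lambda c: c[1] / c[2])
print("best fit:", best[0])
```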

Christian goes on to quote James Hamilton, a Microsoft architect, whose study (PowerPoint here) has convinced him that a power saving of nearly 4x is both possible and affordable using only current technology.

The StorageMojo take
That data centers could be almost 4x more power efficient should surprise no one. Power efficiency has never been a design criterion, so we should expect them to be grossly inefficient.

The same mindset that justifies pricing software add-ons at their business value, rather than at a reasonable profit margin, also designs data centers: plenty of power, plenty of cooling, gold-plated backup power, because the cost of being down is so high.

It is easy to see how that attitude creates wasteful infrastructure and workloads. The question is: will power costs create a significant incentive for change?

My guess is no. The vast majority of computer users, consumers and small to medium enterprises alike, simply do not see power consumption as a significant buying criterion. This is the downside of the consumerization of IT.

Power-efficient IT infrastructure will only come through cost-effective, stepwise enhancements. Component efficiency improvements that do not cost extra will succeed; big engineering programs to build energy-efficient servers will not. If you have fewer than 25 servers, as most IT sites do, power demand is simply not a large part of your costs.
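
To put rough numbers behind that claim (my assumptions, not figures from either post), a 25-server shop pays on the order of ten to fifteen thousand dollars a year for electricity, which is small next to hardware, software, and staff:

```python
# Back-of-the-envelope annual power cost for a small server room.
# Every figure here is an assumption chosen for illustration.

servers = 25
watts_each = 300      # average draw per commodity server (assumed)
pue = 2.0             # facility overhead: cooling, UPS losses, etc. (assumed)
usd_per_kwh = 0.10    # assumed commercial electricity rate
hours_per_year = 24 * 365

kwh_per_year = servers * watts_each * pue * hours_per_year / 1000
annual_cost = kwh_per_year * usd_per_kwh

print(f"{kwh_per_year:,.0f} kWh/year -> ${annual_cost:,.0f}/year")
# Roughly 131,000 kWh and $13,000 per year with these assumptions.
```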

Companies that sell to server vendors should take the issue seriously: the biggest buyers of servers will care about power, even if they account for a minority of the units shipped. But significant power savings will require national standards.

Detroit knew that gas prices wouldn’t remain low forever, but in the absence of higher fuel economy standards they went for the easy money. Are server vendors any different?

Comments welcome, of course. Does your data center charge per square foot or meter, or by wattage?