A post in the occasional Competing with the Cloud series intended for enterprise IT.
In the last post StorageMojo discussed HP’s POD systems, which have a PUE (Power Usage Effectiveness) as low as 1.1, competitive with Google and Amazon.
But what about your existing data centers? Can you reduce their PUE?
Yes, you can.
StorageMojo spoke to Chris Yetman, SVP of Operations for Vantage Data Centers, whose Santa Clara data center is LEED Platinum certified, about how to achieve an ultra-low PUE.
Why PUE is important
If your IT group is being told to do more with less, join the crowd. Improving PUE is an effective way to do just that. Why?
If your PUE is currently around 2, you’re competing with Google, which is at 1.1 today. To put that in perspective: for every megawatt in, Google gets roughly 900KW of IT work done, while you get only 500KW.
Get your PUE down to 1.2, though, and you’ll have 833KW to work with, for only the cost of the improvements. That’s doing more with less.
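To make that arithmetic concrete, here’s a quick back-of-the-envelope calculation in Python (a minimal sketch; the 1MW figure and the PUE values are the ones quoted above):

```python
# Useful IT power per megawatt of facility power: IT power = facility / PUE.
def it_power_kw(facility_kw, pue):
    return facility_kw / pue

for pue in (2.0, 1.2, 1.1):
    print(f"PUE {pue}: {it_power_kw(1000, pue):.0f} KW of IT work per MW in")

# PUE 2.0: 500 KW
# PUE 1.2: 833 KW
# PUE 1.1: 909 KW (the "roughly 900KW" figure above)
```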
Key tips
Mr. Yetman has decades of experience in hosting, and Vantage is serious about power efficiency, as its LEED Platinum certification attests. He is well-versed in the literature and practice of PUE.
Here are his top tips:
Be brave. Many enterprises have a narrow and costly view of proper data center conditions: 65-80F temperatures, 42-60% humidity, and a dew point up to 58F. But Amazon, Google and others have proven that temperatures from 59-90F, humidity from 20-80% and a dew point up to 63F – all ASHRAE-allowable – are very workable and much more efficient.
Even if you suffer a few more failures, they will cost you much less than maintaining the lower temperatures. Which brings up the next tip.
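As an illustration, here’s a minimal sketch of what the wider envelope might look like as a monitoring check. The ranges are the ASHRAE-allowable figures quoted above; the function and its inputs are hypothetical stand-ins for whatever your monitoring system actually provides:

```python
# Hypothetical envelope check using the wider, ASHRAE-allowable ranges above.
ENVELOPE = {
    "temp_f": (59, 90),
    "humidity_pct": (20, 80),
    "dew_point_f_max": 63,
}

def in_envelope(temp_f, humidity_pct, dew_point_f, env=ENVELOPE):
    lo_t, hi_t = env["temp_f"]
    lo_h, hi_h = env["humidity_pct"]
    return (lo_t <= temp_f <= hi_t
            and lo_h <= humidity_pct <= hi_h
            and dew_point_f <= env["dew_point_f_max"])

# 85F at 30% humidity would trip the narrow enterprise limits,
# but sits comfortably inside the wider envelope.
print(in_envelope(temp_f=85, humidity_pct=30, dew_point_f=55))  # True
```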
Embrace failure. Instead of trying to build a bulletproof infrastructure – a costly and self-defeating effort – challenge IT ops to configure robust systems that survive the inevitable failures. Software people like to boast that software eats hardware. Make them prove it.
Challenge vendors to write better software to handle hardware failures non-disruptively. Then test it.
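One way to make them prove it is a failure-injection test. The sketch below is entirely hypothetical – a toy replicated read path plus a harness that keeps killing nodes – but it shows the shape of the exercise: inject failures continuously and assert that the service still answers:

```python
import random

# A toy failure-injection harness: a client that fails over across
# replicas, and a loop that keeps killing nodes and asserting that
# reads still succeed. Everything here is hypothetical; the point is
# to test failover claims rather than take them on faith.

class Replica:
    def __init__(self, name):
        self.name, self.alive = name, True

    def read(self, key):
        if not self.alive:
            raise ConnectionError(f"{self.name} is down")
        return f"value-of-{key}"

def resilient_read(replicas, key):
    for r in replicas:
        try:
            return r.read(key)
        except ConnectionError:
            continue  # fail over to the next replica
    raise RuntimeError("all replicas down")

replicas = [Replica(f"node{i}") for i in range(3)]
for _ in range(100):
    random.choice(replicas).alive = False   # inject a failure
    assert resilient_read(replicas, "k") == "value-of-k"
    for r in replicas:
        r.alive = True                      # repair before the next round
print("survived 100 injected failures")
```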
High voltage power distribution. Every transformer wastes power, so use fewer of them. Deliver 480V to the racks, convert once to 12VDC, and be done. For a given load, higher voltage means lower current and thus lower resistive losses, and the entire system uses less copper, an expensive metal. A worked example follows below.
Also, stop buying those big diesel-generator sets and UPSs. Put 12V batteries in the racks instead: much cheaper and simpler to maintain.
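Here’s why the higher voltage helps: for a fixed power draw, current falls as voltage rises, and resistive loss in the conductors scales with the square of the current. A quick sketch (the 10KW load and the 0.05-ohm conductor resistance are arbitrary figures chosen only to illustrate the scaling):

```python
# Resistive (I^2 * R) loss for the same 10KW load at two distribution
# voltages. The 0.05-ohm conductor resistance is illustrative only.
def conductor_loss_w(power_w, volts, resistance_ohms=0.05):
    current_a = power_w / volts          # I = P / V
    return current_a ** 2 * resistance_ohms

for volts in (208, 480):
    print(f"{volts}V: {conductor_loss_w(10_000, volts):.0f} W lost")

# 208V: 116 W lost
# 480V:  22 W lost -- same load, over 5x less resistive loss
```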
Forget raised floors. They’re expensive and unnecessary.
High-efficiency mechanical & electrical equipment. Make efficiency-driven trade-offs.
Mechanical – high-efficiency fan motors with direct drive. Don’t lose efficiency to gears and belts.
Proper containment. Plug the gaps in racks left by cable runs or removed servers. A well-contained hot aisle creates a chimney effect that reduces fan use.
Measurement. Understand transition points – such as filter walls – and measure across them. The hot aisle should be at lower pressure than the cold aisle. Measure outlet temps, not inlet temps.
Reduce pressure drops. A high pressure variance usually means wasted energy. Measure air pressure on both the inlet and outlet sides.
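Here’s a minimal sketch of the kind of check those last two tips imply. The sensor feeds and thresholds are hypothetical – a real version would pull from your DCIM or BMS system:

```python
# Hypothetical aisle-pressure and outlet-temperature checks. Sensor
# values would come from a DCIM/BMS feed; the thresholds are illustrative.
def check_aisles(cold_aisle_pa, hot_aisle_pa, outlet_temps_f,
                 min_dp_pa=5.0, max_outlet_f=105.0):
    alerts = []
    dp = cold_aisle_pa - hot_aisle_pa
    if dp < min_dp_pa:            # cold aisle should be at higher pressure
        alerts.append(f"low cold-to-hot differential: {dp:.1f} Pa")
    for rack, temp in outlet_temps_f.items():
        if temp > max_outlet_f:   # measure outlets, not inlets
            alerts.append(f"{rack} outlet at {temp}F exceeds {max_outlet_f}F")
    return alerts

print(check_aisles(12.0, 9.0, {"rack-07": 102, "rack-12": 108}))
# ['low cold-to-hot differential: 3.0 Pa',
#  'rack-12 outlet at 108F exceeds 105.0F']
```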
The StorageMojo take
Predictions that IT will go 100% cloud are overblown. Besides the problem of legacy apps, there are real advantages to local production and control.
But IT has to be competitive with cloud vendors. PUE isn’t the biggest issue – management cost is – but showing your CFO that you can compete on PUE and other metrics is key to making the case to keep vital functions in-house.
Staying inefficient because “we’ve always done it that way” ensures a short and unhappy career in today’s competitive environment.
Courteous comments welcome, of course. What other tips do you have?