The CEO of a startup told me yesterday that their data mining software is so efficient that it cuts the processing time of a terabyte of data by 75-80%. Surely, he said hopefully, the energy savings alone would drive customer adoption.
I don’t think so.
He got me thinking about the current fad for green computing and how vendors can take advantage of it. Then I saw this on a Wall Street Journal blog, talking about the IBM venture group’s focus on sensor networks and software and its importance for energy conservation:
While companies have been slow to adopt so-called green technology, IBM thinks that will change. “We’re well beyond convinced,” says Clark. “We’re betting a lot of money on it.” IBM will presumably make a lot of that money back helping companies integrate their new sensor software with existing systems.
How this plays out
In the Google power paper (see Powering a warehouse-sized computer) they made a number of statements that are at odds with the “reduced power use = wonderful” message of the current stage of the hype cycle.
- The capital cost of provisioning a single watt of power is greater than the cost of 10 years of power consumption. That conclusion didn’t seem to rely on dodgy Net Present Value calculations either, so if anything it understates the impact.
- Data centers are most economically efficient operating at close to 100% of provisioned power.
- The greatest opportunity for power savings comes from reducing the power consumption of idle kit, not from making busy kit more efficient.
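The first bullet is easy to sanity-check with a back-of-envelope calculation. The numbers below are assumptions for illustration – a roughly $10/watt capital cost and a $0.07/kWh power rate – not figures from the Google paper:

```python
# Compare the capital cost of provisioning one watt against ten years
# of electricity for that watt running continuously.
# All dollar figures are assumed for illustration.

HOURS_PER_YEAR = 8760
YEARS = 10
PRICE_PER_KWH = 0.07                # assumed industrial rate, $/kWh
PROVISIONING_COST_PER_WATT = 10.0   # assumed low-end capital cost, $/W

energy_kwh = 1 * HOURS_PER_YEAR * YEARS / 1000   # 87.6 kWh per watt-decade
energy_cost = energy_kwh * PRICE_PER_KWH

print(f"10-year energy cost per watt: ${energy_cost:.2f}")
print(f"Provisioning cost per watt:   ${PROVISIONING_COST_PER_WATT:.2f}")
```

Even with a generous low-end capital estimate, provisioning the watt costs more than running it for a decade, which is the shape of the claim.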
Why is the Google view important?
Only commodity hardware and OSS-based data centers get a clear view into power issues. They don’t have the massive maintenance contracts, depreciation and system management costs that overshadow power cost in the enterprise.
Some implications for marketers
We’re going to see a lot more dumb comments from marketers attempting to get their company on the green bandwagon. As customers wise up – and plenty of them are already wise to this happy talk – there will be plenty of backtracking and, uh, clarification.
Unless the customer is caught in the “must take 1 watt out for every watt brought in” trap – which isn’t all that common – the real savings come from accurate provisioning. Vendors can help with that just by providing accurate information about their products so customers aren’t forced to over-provision.
As Google noted, the tendency for everyone to “err on the side of safety” in figuring power requirements is expensive. The equipment does it, the codes add to it and everyone adds their own fudge factor.
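The compounding effect is easy to underestimate. Here is a sketch of how stacked safety margins multiply; every factor below is an assumption for illustration, not a measurement:

```python
# Each layer errs on the side of safety, and the fudge factors multiply.
# All factors below are hypothetical round numbers.

actual_draw = 0.60        # assumed: real draw is 60% of nameplate rating
nameplate = 1.00          # vendor's rated power, normalized to 1 watt
code_derate = 1 / 0.80    # e.g. an 80% continuous-load rule in the code
rack_margin = 1.20        # rack designer's fudge factor
facility_margin = 1.20    # facility planner's fudge factor

provisioned = nameplate * code_derate * rack_margin * facility_margin
overprovision = provisioned / actual_draw

print(f"Provisioned per nameplate watt: {provisioned:.2f} W")
print(f"Over-provisioning vs. actual:   {overprovision:.1f}x")
```

With those numbers, every watt actually drawn carries three watts of provisioned (and paid-for) capacity behind it.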
Where sensor networks fit
I agree with Mr. Clark that sensor networks and the software to run them will be big. What I don’t see is data centers adding them to save a few dozen kilowatts. How could that possibly be cost-effective?
What I can see is equipment vendors adding an interface to a standard set of parameters. That interface could be used to slip machines into a currently non-existent deep sleep mode that takes them down to 10% of their peak power requirement, rather than today’s 50%, while keeping the entire data center at 99% of peak power consumption when busy.
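A rough sketch of what such a deep sleep mode could be worth at fleet scale – the server count, peak draw, and idle fraction below are all assumed for illustration:

```python
# Compare average fleet draw when idle kit burns 50% of peak (today)
# vs. a hypothetical deep sleep state at 10% of peak.
# All inputs are assumed round numbers, not measurements.

peak_watts = 200          # assumed per-server peak draw
servers = 10_000
idle_fraction = 0.5       # assumed share of time a server sits idle

def fleet_avg_watts(idle_power_fraction):
    """Average fleet draw given idle power as a fraction of peak."""
    idle_watts = peak_watts * idle_power_fraction
    return servers * (idle_fraction * idle_watts
                      + (1 - idle_fraction) * peak_watts)

today = fleet_avg_watts(0.50)   # idle kit at 50% of peak
sleep = fleet_avg_watts(0.10)   # deep sleep at 10% of peak

print(f"Average draw today:  {today / 1000:.0f} kW")
print(f"With deep sleep:     {sleep / 1000:.0f} kW")
print(f"Savings:             {(today - sleep) / 1000:.0f} kW")
```

That is hundreds of kilowatts of average draw – and, more importantly, a much tighter gap between provisioned and consumed power.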
The StorageMojo take
There’s a lot more to finding competitive advantage in the green movement than sticking higher capacity disk drives in your same old array. That is just perfuming the pig.
The big win is showing large greenfield data centers how to increase economic efficiency. The power distribution folks have a golden opportunity if they sharpen their tools and get to work. The electrical code people will be a bigger problem. Can data centers get some special treatment? We’ll have to wait and see.
Sure, everyone wants a lower power bill. But over the long term the real win comes from re-architecting data centers with an eye towards total economic efficiency. Some of the work is component level, some is box level, but it is the overall system architectures that will be most affected.
Comments welcome, of course.
It may become cost-effective to be greener if the cost of power goes up. As it stands, I guess it is only worth thinking about saving energy if you have to (because you have some limit on the maximum energy you can draw), if the savings outweigh the costs (long-term savings before the equipment is replaced outweigh the cost of buying the power-saving equipment over the non-power-saving stuff), or possibly if you are new (you don’t already have infrastructure that will go idle but still be paid for).
I have only ever measured consumer kit usage using wall wart devices, but one Intel Core 2 Duo machine I looked at would draw 81W at peak, 40W at idle and 4W when doing suspend-to-RAM. Looking at an Intel Core 2 Duo laptop, I’ve seen it draw up to around 40W peak and 18W idle on AC (although the system alleges that only 8W is being drawn when it’s on battery, idle, and doing maximum power-saving). Obviously these numbers don’t mean anything, but I do wonder how a system can get down to the 10% mark you are mentioning without effectively being turned “off”…
I’d love to know what you think of Cassatt (http://www.cassatt.com/). They tackle the same problem with a slightly different solution.
Google is ‘locked-in’ to triplicated storage, with six disks per motherboard-based storage node. They need all the help they can get… but all of this stuff is marginal. Nothing like this can make them ‘green’, regardless of what they say.