And commercial viability
In a ZDNet blog post, Cloud vs sand: Google vs Microsoft, I discussed the results of a study TwinStrata did comparing the costs and availability of Google Apps and Microsoft Office/Exchange.
An independent study found that on-site Microsoft apps – Office and Exchange – cost 20x more in capital dollars and 5x-6x more on a 3-year Total Cost of Ownership (TCO) basis than Google Apps. How can Microsoft compete?
Cloud, as in Google Apps, and sand, as in locally hosted Microsoft apps, are battling for business mind share. “Cloud is cheaper,” say proponents. “Traditional apps are more reliable,” say skeptics.
The rub: both are right. The business problem is finding the most cost-effective path given your needs.
What the ZDNet post didn’t say – it was too long already – is that TwinStrata did two studies: one of a 20-person firm and one of a 50-person firm. The 50-person study found that the cost of downtime and data loss in the cloud brought the TCO of cloud services much closer to that of locally hosted services.
In short: outsourcing mail and office apps to the cloud doesn’t work for larger customers – not because the Capex and Opex aren’t lower, but because the costs of downtime and data loss are higher. And today the availability of cloud apps is lower than that of a well-managed locally hosted infrastructure.
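That trade-off is easy to see in a back-of-the-envelope model. Here is a minimal sketch; all the dollar figures, downtime hours, and firm sizes below are my own illustrative assumptions, not TwinStrata’s actual data:

```python
# Hypothetical 3-year TCO comparison: cloud vs. locally hosted apps.
# Every number here is an illustrative assumption, not TwinStrata's data.

def tco(capex, opex_per_year, downtime_hours_per_year,
        cost_per_downtime_hour, years=3):
    """Total cost of ownership over `years`, including expected downtime cost."""
    return (capex
            + opex_per_year * years
            + downtime_hours_per_year * cost_per_downtime_hour * years)

# A 50-person firm where an hour of lost mail/productivity is expensive:
cloud = tco(capex=0, opex_per_year=3_000,
            downtime_hours_per_year=12, cost_per_downtime_hour=2_500)
local = tco(capex=40_000, opex_per_year=10_000,
            downtime_hours_per_year=4, cost_per_downtime_hour=2_500)

print(f"Cloud 3-yr TCO: ${cloud:,}")
print(f"Local 3-yr TCO: ${local:,}")
```

With these assumed numbers the cloud’s Capex/Opex advantage is almost entirely eaten by its higher expected downtime cost – the 50-person-firm effect described above.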
A few implications:
- As cloud infrastructure availability – mostly network access – improves, it will become economical for larger organizations.
- This isn’t really about size, but about the time value of company information. Gmail won’t win the global stock arbitrage business anytime soon.
- Lower value communications will migrate to the cloud. Think FedEx vs USPS.
- Will ISPs/MSPs be the preferred SMB services channel – or will Amazon?
The StorageMojo take
I buy James Hamilton’s numbers that large-scale, purpose-built scale-out systems are significantly cheaper – like 1/6th the cost – than standard enterprise kit. What I see in the TwinStrata numbers is the upper bound on who will use cloud services today.
If you’re managing Exchange servers at Procter & Gamble, your job is safe. At a 25 person architecture firm – not so much.
BTW, I’d love it if a reader would download the TwinStrata software and run some comparative studies on time and data value to see where locally hosted makes sense. The software makes nice charts for you so it should go pretty fast.
Courteous comments welcome, of course. I’ve done work for TwinStrata and am impressed by their Clarity AP software. Learn more about that from a video white paper I did a couple of months ago.
Can you provide a reference to your statement: “James Hamilton’s numbers that large-scale, purpose-built scale-out systems are significantly cheaper – like 1/6th – than standard enterprise kit?”
Noah, good question. I tried to (quickly) find that reference when I did the post and no luck. Still not finding the reference that I remember, but James’ presentation Internet-Scale Service Efficiency (pdf) hits the high points in slide 4. I’m remembering a more detailed blog post or paper, and I can’t find either. Maybe a reader can help me out here.
Seems like IT is bifurcating between clouds and sand. The requirements are different, the deliverables too.
We’ll see a limited group (IBM, Cisco, EMC, etc) in the cloud, with a wider variety of solutions from a wider variety of players in the sand.
From my experience, companies start to lose track of the time and costs that go into their IT infrastructure as they move from a handful of people to tens of people. They may think their costs are relatively modest, but they tend to forget the cost of IT planning and management, to minimize the time and cost of outages, and to overlook other hidden costs. I think the real determiner of when you need to move from cloud to sand is when privacy implications start to become a major issue. Otherwise, the level at which you can stay in the cloud should only increase.
I have done a lot of cost comparisons of physical, VM, utility/grid, and cloud (hardware, app, and user) deployments, and of the business continuity / disaster recovery strategies around them all.
The concerns come down to compliance, tiering resiliency, tiering downtime, storage, replication, and offloading the recovery of data and systems.
The further you drill costs down to the systems that can tolerate 12+ hours of data restoration, the more cost-effective your systems become, and you can derive the appropriate licensing for the 0-1 hour, 1-4 hour, and longer recovery time objective (RTO) tiers.
Operational tiering with DR platforms in place allows for true utilization of cloud apps and subsystems, and saves on management.
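The tiering idea above can be sketched as a simple cost model. The tier names, DR techniques, and per-server prices here are entirely hypothetical – the point is only that licensing each system for the recovery tier it actually needs, rather than the top tier, changes the bill dramatically:

```python
# Hypothetical cost model for RTO-tiered DR.
# Tier definitions and per-server prices are illustrative assumptions.

RTO_TIERS = {
    "0-1 hr": {"dr": "synchronous replication", "cost_per_server": 5_000},
    "1-4 hr": {"dr": "async replication",       "cost_per_server": 1_500},
    "12+ hr": {"dr": "tape/cloud restore",      "cost_per_server": 300},
}

def dr_cost(servers_by_tier):
    """Yearly DR cost when each server is licensed for the tier it needs."""
    return sum(RTO_TIERS[tier]["cost_per_server"] * n
               for tier, n in servers_by_tier.items())

# Licensing all 40 servers at the top tier vs. tiering them by need:
all_top = dr_cost({"0-1 hr": 40})
tiered  = dr_cost({"0-1 hr": 5, "1-4 hr": 5, "12+ hr": 30})
print(f"All top tier: ${all_top:,}  Tiered: ${tiered:,}")
```
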
I spend all day shifting designs between multiple purviews, and honestly the real solution is a solid mix of cost effective technologies.
If anyone would like some baselining and industry trend information, including DR/BC ideas, please feel free to contact me.
Mr. Fitch, I would definitely appreciate any information or your input on what you see as the de facto standard for IT DR/BC strategies. Please feel free to email at firstname.lastname@example.org