The assumption that underlies much of the interest in cloud computing is that there are economies of scale. If there are not, the extra costs of bandwidth and latency will make cloud computing too costly.
Ever since Google demonstrated that massive infrastructures could be built from commodity hardware and open source software, system architects have sought similar advantages at lesser scale. What people tend to ignore is that Google’s infrastructure is optimized for a few very specific applications.
The Google File System and Google’s storage system, BigTable, are designed to handle the massive amounts of data that Google acquires and searches every day. Each Google rack contains only 120 disk drives, which is low density compared to most commodity servers.
Google has shown us a way to build massive infrastructures, but not the way. They have built a warehouse-sized search appliance.
What makes storage cheaper?
Here is a list:
- Commodity drives. Cheap drives make for cheap storage.
- Wide fan-out. Amortizing interconnect costs across more drives will lower costs further. Performance may suffer, depending on the workload.
- Free software. Linux, OpenSolaris, Hadoop, and other products are among the candidates.
- Low cost networking. Unmanaged Ethernet switches.
- Self management. When the rest of the infrastructure is either cheap or free, people costs rapidly become the dominant factor.
- Low entry cost. Cloud storage has a definite advantage. Faster setup and lower capital costs are tangible benefits.
Other than fan-out, none of these factors is very sensitive to scale. Of course there are other issues: network costs, data center costs, and power costs.
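To put rough numbers behind the list, here is a back-of-the-envelope sketch; every figure is an assumption for illustration, not a quote:

```python
# Back-of-the-envelope $/TB/year for a commodity storage pod.
# Every number is an assumption for illustration only.
drives_per_node = 12         # wide fan-out: more drives behind each server and switch port
tb_per_drive    = 1.0
drive_cost      = 150.0      # assumed commodity SATA drive price
node_cost       = 2000.0     # server plus unmanaged Ethernet ports, per node
admin_salary    = 100_000.0  # fully loaded cost per admin, assumed
tb_per_admin    = 500.0      # how much one admin can manage: the decisive variable
years           = 3          # depreciation period

capacity_tb  = drives_per_node * tb_per_drive
capex_per_tb = (drives_per_node * drive_cost + node_cost) / capacity_tb / years
admin_per_tb = admin_salary / tb_per_admin

print(f"hardware: ${capex_per_tb:,.0f}/TB/yr   people: ${admin_per_tb:,.0f}/TB/yr")
# With these assumptions: hardware roughly $106/TB/yr, people roughly $200/TB/yr.
# The admin ratio, not the drive price, dominates.
```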
Where are the economies?
But once you get above a dozen or so racks, what other economies of storage scale are there? I’m asking the question, so feel free to provide answers.
The StorageMojo take
People may be the most important economy of scale in storage. If one infrastructure requires 1 admin for 100 TB and another only 1 for 500 TB, it is obvious who will win, at least in the United States.
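To put assumed numbers on that: at a fully loaded cost of, say, $100,000 per admin per year, 1 admin per 100 TB works out to $1,000/TB/year in labor alone, while 1 admin per 500 TB is $200/TB/year, a gap no drive discount will close.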
This suggests that cloud storage will need unique services to win. Online backup is an example of a service where users are buying more than capacity.
Then the problem becomes, at least for consumer services, designing offers that are attractive enough to get consumers to sign up and are profitable for the provider. And that means a marriage of marketing, finance and technology. Competing purely on price will be a fool’s game.
Courteous comments welcome, of course.
Robin, economies of scale for services come from swift and effective resource management. Cloud computing services won’t be economically viable if the resources in cloud data centers aren’t flexibly shared across multiple customers/clients. Cloud providers need technologies that let them implement and shift resources very quickly because cloud customers want to scale capacity and change service levels on demand. They expect cloud service providers to respond much faster than they can themselves.
Commercial cloud customers also have serious uptime demands and large scale DR is also going to be part of the cloud solution. This is an area where economies of scale are very hard to come by. However, efficiency-optimized technologies like Thin Provisioning, de-dupe and WAN acceleration all provide critically important economies of scale for DR. Cheap storage doesn’t provide much in the way of economies of scale. Someday maybe, but not today.
It’s not just commodity drives; the use of commodity hardware rather than proprietary array or NAS hardware can also be an important factor.
One possible savings is, as Marc says, being able to have higher utilization, since demand is shared across many customers (and assuming customers don’t all need max capacity at the same time).
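A toy illustration of that pooling effect, with made-up demand numbers: if customers’ bursts are uncorrelated, the capacity needed for the combined peak is far smaller than the sum of everyone’s individual peaks.

```python
import random

# Toy model of the pooling effect: N customers with bursty, uncorrelated demand.
# All demand numbers are made up for illustration.
random.seed(1)
N, HOURS = 50, 1000
demand = [[random.expovariate(1 / 10) for _ in range(HOURS)] for _ in range(N)]  # TB in use per hour

solo_capacity   = sum(max(series) for series in demand)                           # everyone buys for own peak
pooled_capacity = max(sum(series[h] for series in demand) for h in range(HOURS))  # provider buys for shared peak

print(f"sum of individual peaks: {solo_capacity:,.0f} TB")
print(f"peak of pooled demand:   {pooled_capacity:,.0f} TB")
# The pooled peak is several times smaller because customers rarely all spike at once;
# that is why a shared provider can run at higher utilization.
```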
Overall, I think the key push comes back to savings in IT people’s time. One key Google advantage is that they are able to run their server farms with far fewer admins than anybody else. Same goes for non-storage cloud apps – a lot of their allure is: 1) access anywhere and 2) very low administrative costs (compared to keeping a bunch of Windows PCs happily running and up to date).
Overall, however, I’m a bit of a cloud skeptic – there are a lot of potential problems the hypesters haven’t thought through.
Robin, you said “This suggests that cloud storage will need unique services to win. Online backup is an example of a service where users are buying more than capacity”
I agree. We are currently testing Wuala at Storage Monkeys to determine just what the value is. Wuala is a new peer-to-peer online storage service that lets you share and use the storage you make available online. When you boil it down, Wuala is just capacity with a clever way of aggregating storage resources. It is clearly consumer-grade storage for now. In my opinion, cloud storage is going to have a tough time making it in the enterprise, which is why I am very curious to hear what EMC’s plans are with Maui.
-James Orlean
http://www.StorageMonkeys.com/
Marc, taking that model to its logical conclusion, cloud storage providers will end up supporting the most difficult clients – the hamsters who are always hopping around – while the elephants leave for their own data centers. That may be a good business model. But once the tools are available for Internet data centers, how long will it take for them to trickle down to regular ISPs and enterprise data centers?
Nik, agreed, commodity HW in general cuts capex.
Tony, does the technology exist that auto-magically adds capacity for people who need it and removes it from people who aren’t using it? The portfolio effect argument is appealing, but it depends on the granularity and reaction time of the infrastructure to shifts in demand.
James, sounds like a variety of the “edge-centric network.” I like the concept, but how do you get end users to offer up their unused capacity?
Robin
Don’t forget power. It’s hard to get that 1.1 PUE if you don’t have your own warehouse.
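For readers who haven’t run into the metric: PUE is total facility power divided by IT equipment power, so 1.1 means roughly 10% overhead for everything that isn’t a server. A sketch with hypothetical numbers:

```python
# PUE (Power Usage Effectiveness) = total facility power / IT equipment power.
# Hypothetical numbers: 1,000 kW of IT load plus 100 kW of cooling, UPS and distribution losses.
it_kw, overhead_kw = 1000, 100
pue = (it_kw + overhead_kw) / it_kw
print(pue)  # 1.1, meaning only 10% of the power bill goes to anything other than the servers
```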
Robin, I don’t follow your logical conclusion. Can you elaborate?
Can you get to 1.1 PUE? I’ve never heard of anyone doing that.
Anyway, one answer is that you can negotiate pretty good prices from storage vendors if you’re buying a PB every other month. Homogeneity is another.
A large, homogeneous storage operation requires quite a lot less in terms of labor. We have a site that adds one PB every other month to its ops, and it only has ONE on-site storage engineer. Part of the key to that is the software that manages it all, of course.
C//
You’re posing the question in a bad way. In asking about “economies of scale,” you’re assuming a cost curve where larger scale decreases costs at all scales. This is the kind of cost curve that leads to natural monopolies, and is fairly rare. Much more typical is a cost curve with a minimum competitive scale, above which costs flatten or may even increase. That’s clearly the case for storage, though exactly where that scale lies may be hard to pin down. One sysadmin can effectively maintain some amount of storage, but if you need less, it’s hard to hire a fractional sysadmin. Properly prepared space doesn’t scale down at the same cost either.
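A toy version of that cost curve, with assumed figures, makes the fractional-sysadmin point visible: per-TB cost falls steeply until the first admin and the first prepared room are fully used, then flattens out.

```python
import math

# Toy cost curve: hardware scales linearly, but admins and built-out space come
# in indivisible chunks. All figures are assumptions for illustration.
HW_PER_TB    = 300.0      # $/TB/yr, flat at any scale
ADMIN_COST   = 100_000.0  # $/yr per admin
TB_PER_ADMIN = 500.0
ROOM_COST    = 50_000.0   # $/yr per prepared room (power, cooling, racks)
TB_PER_ROOM  = 1_000.0

def cost_per_tb(tb):
    admins = math.ceil(tb / TB_PER_ADMIN)  # you can't hire 0.3 of a sysadmin
    rooms  = math.ceil(tb / TB_PER_ROOM)
    return HW_PER_TB + (admins * ADMIN_COST + rooms * ROOM_COST) / tb

for tb in (20, 100, 500, 2000, 10000):
    print(f"{tb:>6} TB -> ${cost_per_tb(tb):,.0f}/TB/yr")
# 20 TB is crushed by the fixed costs; past a few hundred TB the curve flattens,
# which is the minimum competitive scale described above.
```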
In addition, a big selling point is the ability to grow capacity. A small operator faces hard-to-predict growth, and often has to grow in painful increments on short notice (which is expensive). A large aggregator smooths out variation, so can plan and grow. Finally, there’s the whole tradeoff between capital and operating expense.
Put these together and, yes, it’s hard to see an advantage for the top end guys, who can afford to build at a scale comparable to cloud providers. But for smaller guys below that level, and especially for new guys who are growing rapidly (or hope to), there’s an advantage. (There’s a particular advantage, perhaps, to an in-the-cloud service that provides 100% compatibility with an in-the-datacenter box. Then when I’m small, I can go to the cloud; transition cleanly to my own hardware when it makes sense; and perhaps even use the cloud later if I’ve planned badly and need to tide myself over while expanding in-house.)
As for special services in the cloud: absolutely, though you have to answer the same questions about scaling. But there’s a fairly obvious such service that’s already out there, namely computing in the cloud: Amazon’s EC2 tied to their S3. Good connectivity between computational and storage resources is obviously important for many classes of application. Any other special services will have to run in “EC2”. But even without those special services, computational resources currently have the property that I need to pay for my peak usage, even if my average is way below that. Over the years, there have been a number of attempts to attack this – remember IBM’s “speed dial” on leased machines? But large shared computational resources are potentially the first really effective answer. There’s some report around that a typical Google query involves 1,000 CPUs. For Google, that’s part of average usage; as we figure out how the Google style of computing can be applied to other kinds of problems, most users will probably find they need 1,000 CPUs – but only for an hour a day, perhaps. Putting that kind of thing out on an EC2-like service may prove very attractive. (BTW, in principle, the same argument might apply to storage – but at least right now, all the big storage applications I know of keep stuff around indefinitely. Usage isn’t spiky – it just grows.)
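The spiky-usage arithmetic is worth making explicit. With assumed prices (both the per-CPU-hour rate and the cost of owning a machine are illustrative guesses), renting the burst beats owning the peak by more than an order of magnitude:

```python
# Owning for the peak vs. renting for the burst. All prices are illustrative assumptions.
cpus_needed     = 1000
hours_per_day   = 1          # the burst: 1,000 CPUs for an hour a day
rate_per_cpu_hr = 0.10       # assumed cloud price, $/CPU-hour
owned_cpu_year  = 1000.0     # assumed annual cost to buy, power, and admin one CPU

rent_per_year = cpus_needed * hours_per_day * 365 * rate_per_cpu_hr
own_per_year  = cpus_needed * owned_cpu_year

print(f"rent the burst: ${rent_per_year:,.0f}/yr   own the peak: ${own_per_year:,.0f}/yr")
# Roughly $36,500 vs. $1,000,000 a year: paying for average use rather than peak
# is the whole appeal of the EC2-style model.
```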
Put another way, storage infrastructures are commoditizing. That means the winning strategy is to focus on how to organize your storage to create more value, given that cost is not much of a differentiator.
Availability? Flexibility? Security? Customer ease-of-use?
Since we’ve drifted from “Are there economies of scale …?” to “Are economies of scale required …?”
Consider three scenarios from the buyer’s perspective:
1 – I need more storage, but can’t get the CAPEX+OPEX approval to buy and provision the next big chunk of storage that would carry us through 1-3 years of projected requirements.
So I store all new data in the cloud using incremental OPEX. Some users complain that off-site storage is too slow.
Eventually, my OPEX and user complaints climb to the point that I can get CAPEX+OPEX approval to buy and provision the next big chunk of storage. Lather, rinse, repeat.
In this scenario, the incremental price I’m willing to pay for another terabyte in the cloud will be much higher than my marginal purchase/provision cost. I need that extra terabyte *now*, not when I can get multiple approvals for data center expansion, server purchase, sys admin hires, …
2 – We want to acquire a 300 TB data set that will ultimately be reduced to a few TB. Once we’re done, we don’t need the original 300 TB data set any more. Cloud storage and computing would be ideal for this project. I avoid large investments in server capacity, data center buildout, etc. I’m happy to pay much more than my marginal purchase/provision cost for this short-term project.
3 – My formerly very obscure web site gets a sudden spike in interest (see Indian Ocean Tsunami, see “slashdotted”). I move my most popular content to a quickly provisioned CDN/S3 site, and my users are happy. I deprovision quickly after the spike subsides, and my web site sinks back into well-deserved obscurity. I avoid large investments in rarely used network bandwidth, server capacity, etc. I’m happy to pay much more than my marginal purchase/provision cost for this temporary service. Here’s a good paper on using the cloud (“utility computing”) for just this scenario: http://research.microsoft.com/~howell/papers/flashcrowds-camera-ready.pdf
Seems like you could build a pretty good business model around these scenarios (and others), even if your marginal cost is 10%-20% higher than mine.
For Wes Felter: I’m quite confident PUE in this Microsoft experiment was below 1.1: http://blogs.msdn.com/the_power_of_software/archive/2008/09/19/intense-computing-or-in-tents-computing.aspx