Robert Gray, a former analyst at IDC and a long-time friend, asked me to respond to a comment from NetApp president Tom Georgens.
Tom told Information Week that:
. . . virtualization is fundamentally changing the way data centers are designed: instead of individual infrastructure around individual apps, they’re building one single infrastructure that’s independent of the apps. And now they want the same with storage because customers are seeing that virtualization is the way to dramatically improve the flexibility of the business as well as IT costs.
NetApp’s on a tear
NetApp is growing, approaching a $5 billion annual run rate. Mr. Georgens attributes part of that growth to the 50% capacity savings guarantee that NetApp announced two years ago.
That guarantee, which originally applied only to the storage of VMs, appears to have morphed into a more generic talking point: buy NetApp and we’ll give you the same capacity with half the boxes.
Gee, a marketing initiative had an impact on the bottom line? Don’t tell HDS!
The StorageMojo take
It is natural to conflate the impact of virtualization with the rise of single-infrastructure thinking. But that is an accident of timing, not cause and effect.
The single-infrastructure meme grows out of the obvious success of scale-out processing and storage architectures, as exemplified by Amazon and Google. Even without VMware, the economic advantages of Internet-scale cloud computing and storage would drive the industry in this direction.
What virtualization is doing is driving IT architects and CIOs to rethink long-settled – fossilized – infrastructure practices. As they do so it becomes clear that the standard stove-piped enterprise IT model no longer makes sense either economically or operationally.
True, virtual machines are making the desirability of a common storage infrastructure obvious to even the most hidebound IT shop: migrating storage with a VM is tough enough – why have a different infrastructure at the other end? But virtualization or no, the old model of bespoke infrastructure for each application was not economic. Today’s glass houses are factories, not artisanal workshops, and they need industrial scale infrastructure.
This builds on the long-term secular trend of the commoditization of IT. This is why all the array vendors have been busy migrating their custom hardware to software running on x86 commodity boards.
Ironically, NetApp doesn’t actually offer a single scale-out infrastructure, with the exception of the recently acquired Bycast. The Spinnaker acquisition – lo, these many years past – was supposed to solve the problem, but the integration issues have proven more difficult than NetApp ever imagined.
Enterprises will never reach the scale of a Google or Amazon. However, they will find that scale, even at the enterprise level, has its own dynamics. One implication is a widespread move to object storage.
More on that later.
Courteous comments welcome, of course. Robert, I hadn’t seen Tom’s words. Thanks for bringing them to my attention.