VMworld is the best storage show I’ve seen in years. VMware’s severe storage problems leave users hungry for solutions – and your friendly neighborhood storage industry is happy to oblige.
It’s almost as if VMware were owned by a storage company.
Flash everywhere
Fusion-io, Nimble Storage, Nimbus Data, Avere, Pure and more were talking about how well flash supports VMware: it fixes VDI boot storms, speeds deduped VMDKs, relieves I/O-bound servers and much more.
Pure Storage
Here is Pure’s Matt Kixmoeller giving a nifty demo in this 50-second video:
Not exactly sure what those thousand VMs were doing. Maybe Pure will comment.
Falconstor
I lost track of Falconstor due to their OEM focus and sprawling product line. New CEO Jim McNiel has refocused the company – with the help of former Cheyenne teammates – on backup, business continuity/DR, dedup and virtualization.
Their clustered Network Storage Server turns all of Fstor’s products into tin-wrapped software suitable for channel partners. Takeaway: forget what you knew about them; they are a new company.
Virsto
While the release of their storage hypervisor for VMware makes them seem like a new company, Virsto has been shipping product for over a year, but on Hyper-V, not VMware. Microsoft lost interest in server virtualization and Virsto moved on.
Their product is a virtual appliance that:
. . . runs in each host, creating a transparent virtual storage layer that is thin provisioned, fully cluster-aware, supports very rapid snapshot and clone creation, and scales to support tens of thousands of high performance snapshots and clones.
Virsto . . . decouple[s] application performance from any dependence on the rotational latencies and seek times of underlying disk associated with random writes. All random writes are sequentialized and written directly to a transparent logging device . . . and then asynchronously de-staged to primary storage. . . .
Net/net: high performance virtual storage regardless of underlying physical storage. Virsto offers a free trial – if you try it, let me know how it works.
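The write path Virsto describes above is essentially log-structured: random writes land sequentially on a fast log device and get de-staged to their home locations later. Here is a minimal, hypothetical Python sketch of that idea; the class and method names are mine for illustration, not Virsto’s.

```python
# Minimal sketch of a log-structured write layer: random writes are
# appended sequentially to a fast log device, then asynchronously
# de-staged to their real locations on primary storage.
from collections import OrderedDict

class LogStructuredWriter:
    def __init__(self, log_device, primary_device):
        self.log = log_device          # fast, sequential-append device (e.g. SSD)
        self.primary = primary_device  # slower random-access device (e.g. SATA disk)
        self.pending = OrderedDict()   # logical block -> newest data still in the log

    def write(self, lba, data):
        """Acknowledge the write as soon as it is safely in the log."""
        self.log.append((lba, data))   # sequential I/O, no seek penalty
        self.pending[lba] = data       # remember where the newest copy lives

    def read(self, lba):
        """Serve reads from the log if the block has not been de-staged yet."""
        return self.pending.get(lba) or self.primary.read(lba)

    def destage(self, batch_size=64):
        """Background task: lazily copy logged blocks to primary storage."""
        for _ in range(min(batch_size, len(self.pending))):
            lba, data = self.pending.popitem(last=False)
            self.primary.write(lba, data)
```

The point of the sketch is simply that the host sees flash-speed writes while the spinning disks only ever see the lazy, batched de-stage traffic.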
But wait! There’s more!
Cloud-related products from StorSimple, AMAX and Raidundant continue to pick at the problem of how/when/where cloud integrates with the enterprise.
The StorageMojo take
Many cool products and ideas. The storage problems of many virtual machines are not unlike those of earlier time-shared virtual memory systems, but the scale is much greater.
And when the scale is greater, the problem is fundamentally different. As virtualization grows, we’ll need to see more creative answers beyond deduplication and flash.
Courteous comments welcome, of course. Message to SNIA: storage networking is passé. Time to retool for the world of virtual machines, noSQL databases, scale-out storage and flash-enabled architectures. A new name would be a start.
Presumably the bulk of the first group are running on some form of Linux or BSD, adding in their software special sauce, and then rolling it into their own commodity box.
Question: these at least appear to be capable of a software-only play. What is the advantage of putting it on your own hardware? Support likely gets easier, but then you take on supply chain, upstream support issues and so on.
Just curious. It seems like a lot of innovation is coming out of companies like the first batch, but they are mostly doing the same thing, with the one common element not adding much value.
C
Chris, good points. We are very early in understanding a) how flash works; b) how to best leverage flash; c) cloud storage options & long-term economics; and d) how best to leverage cloud, be it public or private.
The tin-wrapped SW question comes up a lot. Startups usually don’t want to do it because of inventory, development and support costs. But customers – especially enterprise customers – often prefer it. Compare Data Domain and Isilon to Diligent and any number of cluster storage software vendors. The downloadable VM is attractive for eval and light usage models, but most data factories want to know that for x input they’ll get y output – and a tested, supported HW config makes that much more doable.
Robin – thanks for including Virsto in your post-show comments. VMworld was a great opportunity for us to launch the industry’s first multi-platform “storage hypervisor”. Virsto’s vision is to support any hypervisor (with integrated VM-centric storage management within the existing management interface – vCenter or System Center), on any heterogeneous block-based storage: delivering high-end storage features like high-performance thin provisioning, with scalable snaps & clones, for a whole lot less than the going rate for high-end arrays ($2,800 per host).
Virsto has partnered closely with Microsoft over the past 12 months to help extend and enhance storage management on the Hyper-V platform, and actually just received our Windows Server Certification. We’re relying on the same storage hypervisor architecture that powers our Hyper-V solution to support ESX, and plan on continuing to build significant business on both platforms (and likely others) moving forward as the trend for multi-platform hypervisor environments continues to accelerate.
In case anyone is wondering, Gregg is Virsto’s marketing VP.
Hi Robin,
Thanks for posting the video!
In answer to your question, the 1,000 VMs are actually 500 Linux VMs and 500 Windows Server 2008 R2 VMs. The Linux VMs are running a proprietary load generation tool called PureLoad, which simulates an OLTP workload with an 80/20 read/write ratio. The Windows VMs are running IOmeter, again with an 80/20 mix. The VMs vary their load, but are constrained to 120 IOPS each.
You can see many more details on the demo setup in a blog post we did on our site: http://www.purestorage.com/blog/1000-vms-demo-vmworld-2011/
Thanks again, and see you on Friday for Tech Field Day 8 from the Pure Storage offices! http://techfieldday.com/2011/tfd8/
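For readers curious what the rate-limited 80/20 workload described above looks like in practice, here is a minimal, hypothetical Python sketch of one VM’s load loop. It is purely illustrative: it is not PureLoad or IOmeter, and the target path, block size and working-set size are assumptions.

```python
# Hypothetical sketch of one VM's load loop: 80/20 read/write mix,
# capped at 120 IOPS, against a plain test file.
import os, random, time

DEV = "/tmp/testfile"      # stand-in target; a real test would use a block device
BLOCK = 4096               # 4 KB I/Os
MAX_IOPS = 120             # per-VM cap from the demo description
READ_RATIO = 0.8           # 80% reads, 20% writes
SIZE = 256 * 1024 * 1024   # 256 MB working set for this illustration

def run(duration_s=10):
    with open(DEV, "wb") as f:      # create a sparse target file once
        f.truncate(SIZE)
    fd = os.open(DEV, os.O_RDWR)
    buf = os.urandom(BLOCK)
    end = time.time() + duration_s
    while time.time() < end:
        start = time.time()
        for _ in range(MAX_IOPS):   # issue one second's worth of I/Os
            offset = random.randrange(0, SIZE // BLOCK) * BLOCK
            os.lseek(fd, offset, os.SEEK_SET)
            if random.random() < READ_RATIO:
                os.read(fd, BLOCK)
            else:
                os.write(fd, buf)
        # sleep out the rest of the second to hold the IOPS cap
        time.sleep(max(0.0, 1.0 - (time.time() - start)))
    os.close(fd)

if __name__ == "__main__":
    run()
```

A real load generator would bypass the page cache and measure latency per I/O, but the shape of the workload – mostly random reads, throttled to a fixed rate – is the same.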
Very interesting range of vendors and approaches.
We have guys like Pure who put all the data on flash and use inline dedupe to help lower the cost per byte. Good concept, and as long as the inline dedupe doesn’t cause a slowdown, performance should be great for ALL your data.
We have others who use much less flash in a COW-type scenario to convert random IOs into sequential IOs going to spinning disk. That also has benefits, particularly with VMs where not all your data is hot. It can also justify using even faster flash (PCIe-attached), since only a small amount is needed, so your peak performance is likely higher.
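The first of those two approaches, inline dedupe, boils down to content fingerprinting: hash each incoming block and store only the first copy. A minimal, hypothetical Python sketch (any real array’s implementation is far more involved):

```python
# Hypothetical sketch of inline block deduplication: hash each incoming
# block and store only blocks whose fingerprint has not been seen before.
import hashlib

class DedupeStore:
    def __init__(self):
        self.blocks = {}   # fingerprint -> physical block data (stored once)
        self.map = {}      # logical block address -> fingerprint

    def write(self, lba, data):
        fp = hashlib.sha256(data).hexdigest()
        if fp not in self.blocks:   # new content: store it
            self.blocks[fp] = data
        self.map[lba] = fp          # duplicate content: just point at it

    def read(self, lba):
        return self.blocks[self.map[lba]]

    def dedupe_ratio(self):
        # logical blocks mapped per unique physical block stored
        return len(self.map) / max(1, len(self.blocks))
```

The cost-per-byte argument for all-flash rests on that ratio: hundreds of near-identical VMDKs collapse into a much smaller set of unique blocks.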
But in the end the bet comes down to whether flash can ever displace spinning rust on a cost-per-byte basis. If it can, I’d expect to see all these complex approaches go away and we just start stuffing arrays with flash instead. Performance goes up, power usage goes down, and reliability should also go up. As for the odds of this happening?
Long ago we were all told LCD displays would NEVER displace CRTs because the plants were already built and the LCD tech was too expensive. When was the last time you walked into a Best Buy and saw CRTs in use for TVs or computer displays? About 3 years ago. My bet is that flash will eventually do this to spinning disk, and we likely only need to wait 4 to 5 years.
Very interesting times.
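That 4-to-5-year bet can be sanity-checked with back-of-the-envelope arithmetic. In the sketch below the starting prices and annual decline rates are assumed purely for illustration, not quoted from any vendor; the point is the shape of the calculation, not the specific answer.

```python
# Back-of-the-envelope flash vs. disk $/GB crossover.
# All numbers are illustrative assumptions, not real street prices.
import math

def years_to_parity(flash_per_gb, disk_per_gb, flash_decline, disk_decline):
    # Solve flash*(1-f)^t == disk*(1-d)^t for t.
    price_gap = math.log(disk_per_gb / flash_per_gb)
    decline_gap = math.log((1 - flash_decline) / (1 - disk_decline))
    return price_gap / decline_gap

FLASH, DISK = 2.00, 0.10           # assumed $/GB starting points
F_DECLINE, D_DECLINE = 0.40, 0.15  # assumed annual price drops

print(f"Raw $/GB parity in ~{years_to_parity(FLASH, DISK, F_DECLINE, D_DECLINE):.1f} years")
# Data reduction (say 5:1 from dedupe + compression) narrows the gap:
print(f"With 5:1 reduction: ~{years_to_parity(FLASH / 5, DISK, F_DECLINE, D_DECLINE):.1f} years")
```

Under these assumed numbers, raw parity is nearly a decade out, but effective parity with data reduction lands in roughly the 4-to-5-year window the comment above bets on.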
What happens when the industry settles on NVMe / NVM Express and starts treating the PCIe bus as a place to put NVM storage in the server? Wouldn’t that performance be captive to the individual server?
Sanguy – Interesting comments, but really poor analogy. You can’t compare consumers swapping out monitors with conservative IT admins swapping out storage technology. Only once the economics make sense and IT admins have confidence in the technology will it really take off.
I think we will see a wide mix of implementations that all work for a given use case. No doubt Pure’s approach works for the targeted audience. For the SMB audience that we (Drobo) cater to, using a mix of SSDs and spinning disk in a tiering arrangement makes the most sense. The bottom line is getting SSDs integrated in a way that improves price/performance and ROI, which all of these implementations do.
These are great efforts by Pure and other flash vendors that are positioning flash as a replacement for magnetic disk. But the analogy of viewing an entertainment program on LCD vs. CRT is not appropriate, especially when it comes to removing mission-critical data from magnetic disks and placing it on flash.
The issue is not so much getting the technology to work, but more about understanding its behavior. The industry has understood the behavior of magnetic recording over the last 100 years or so, versus flash, a very recent phenomenon built on storing electric charge. The underlying principles are fundamentally different. At the subatomic level they may be similar or even the same, but we have not reached that level of understanding yet; for now it stops at magnetic spin vs. electric charge.
While the industry ecosystem evolved around an understanding of hysteresis over many decades, we adapted to the behavior and built products with good control over it. Repeating that with electric charge will take some time to establish the same confidence.
Magnetic recording grew up with small amounts of data and could initially live with failures, since paper records served as backup. Flash does not have that luxury: with the information explosion under way, there is no scope for experimentation. It is going to be a tough ride to prove flash is fit for all data (especially mission-critical data), although it may not be impossible.
Chris,
I think it’s more about the margins that come from packaging the software and hardware together. Also, I would think that some of the flash manufacturers have invested in these companies.
Robin,
Totally agree that the focus is on fixing VMware’s storage end. Do you have any links or videos that discuss or illustrate ‘deduped VMDKs’? Also check out http://www.quadstor.com/vaai-and-data-deduplication.html. I think it’s related.
My take on SSDs is that the most popular systems will be a mix of SSDs and spinning disks for some time, with software adapting to use the SSDs as a fast cache. All-flash systems are the future, but we are at least 4 to 5 years away.
Cheers
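The hybrid approach in the comment above usually comes down to a small flash layer fronting the disk pool. A minimal, hypothetical Python sketch of an LRU read cache on SSD (the class and method names are mine, and write-through is just one of several possible policies):

```python
# Hypothetical sketch of a hybrid array's read path: a small LRU cache
# on SSD fronts a large pool of spinning disk.
from collections import OrderedDict

class HybridReadCache:
    def __init__(self, hdd_pool, capacity_blocks):
        self.hdd = hdd_pool                 # backing store on spinning disk
        self.capacity = capacity_blocks     # how many blocks fit on the SSD tier
        self.ssd = OrderedDict()            # lba -> data, kept in LRU order

    def read(self, lba):
        if lba in self.ssd:                 # cache hit: serve at flash latency
            self.ssd.move_to_end(lba)
            return self.ssd[lba]
        data = self.hdd.read(lba)           # cache miss: go to disk
        self.ssd[lba] = data                # promote the block to flash
        if len(self.ssd) > self.capacity:   # evict the coldest block
            self.ssd.popitem(last=False)
        return data

    def write(self, lba, data):
        self.hdd.write(lba, data)           # write-through keeps disk authoritative
        if lba in self.ssd:                 # keep any cached copy consistent
            self.ssd[lba] = data
            self.ssd.move_to_end(lba)
```

The appeal for SMB and hybrid arrays is that a small amount of flash catches the hot working set while cheap spinning disk holds everything else.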