The array IP implosion

by Robin Harris on Monday, 23 May, 2016

We’ve seen this movie before
The value of legacy array intellectual property is collapsing. This isn’t complicated: SSDs have made IOPS – what hard drive arrays were optimizing for the last 25 years – easy and cheap.

Think of all the hard-won – well, engineered – optimizations that enabled HDD-based arrays to dominate the storage market for much of the last 20 years.

  • RAID codes. Software disk mirroring – a huge money-maker in the 80s – moved into small, cheap(er) controllers, followed by RAID 5 and, later, RAID 6.
  • Caching. Given the bursty nature of most storage traffic, controller-based caches dramatically improved average performance.
  • Redundancy. RAID managed – but never solved – the problem of disk failure; drive interface, driver, and array controller redundancy issues – such as cache coherency – required lots of careful problem solving.
  • I/O stack. This wasn’t typically a storage vendor problem, but there was lots of collaboration.
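At their core, those hard-won RAID codes rest on simple parity arithmetic. As a minimal sketch – not any vendor’s implementation – RAID 5’s single-parity scheme is just a byte-wise XOR across the data strips, which is also how a lost strip gets rebuilt:

```python
from functools import reduce

def parity(strips):
    """Compute the RAID 5 parity strip as the byte-wise XOR of all data strips."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*strips))

def reconstruct(surviving_strips, parity_strip):
    """Rebuild one lost data strip: XOR the parity with every surviving strip."""
    return parity(surviving_strips + [parity_strip])

# Three data strips on three drives, parity stored on a fourth
d0, d1, d2 = b"\x0f\x0f", b"\xf0\xf0", b"\xaa\xaa"
p = parity([d0, d1, d2])

# The drive holding d1 fails; recover it from the survivors plus parity
assert reconstruct([d0, d2], p) == d1
```

RAID 6 adds a second, independent parity (typically a Reed–Solomon code over GF(2^8)) so the array survives two simultaneous drive failures – the same idea, with more expensive arithmetic.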

While these techniques remain relevant, the massive “storage operating systems” embedded into legacy storage arrays are now boat anchors, dragging performance down while remaining costly in support and CPU cycles. That’s been the problem plaguing VMAX and VNX ever since EMC embedded the first STEC SSD into them: the architecture allowed only a fraction of the possible SSD performance to be achieved.

Remember the minicomputer?
Joe Tucci does. He was the CEO of Wang Labs after it emerged from the PC holocaust that took down all the minicomputer companies.

The PC started the process of destroying decades worth of intellectual property value held by vertically integrated computer companies. Newcomers, like Dell, could buy CPUs from Intel, an OS from Microsoft, an RDBMS from Oracle, disks from Seagate, and networking from 3Com and Novell at far lower cost than the DECs and Data Generals could sustain in upgrading their own proprietary products.

Today, the variety of storage alternatives – cloud, AFA, hybrids, converged, etc. – is destroying the value of legacy array software and architectures. The increasing pace of storage change from new non-volatile memories will accelerate the process.

The StorageMojo take
The destruction of the minicomputer industry by Wintel is not ancient history. So why have EMC and NetApp been so slow to respond?

To be fair, EMC’s Tucci has been more active than NetApp’s team in taking on cloud and flash. But selling EMC to Dell – after failing to persuade HP to buy it – shows what Tucci thinks of EMC’s chances as a standalone storage company.

Storage systems will remain on the market. There are a number of options available today that are as cheap, or cheaper, than cloud storage – and earning good margins. More on that in another post.

Courteous comments welcome, of course.


KD Mann May 25, 2016 at 1:32 pm

Great post Robin.

I would offer though that perhaps what’s really dead is the value of array IP on proprietary storage hardware.

Over the last 15 years or so, we’ve seen storage array IP migrate from proprietary RISC-based storage hardware platforms onto cheap (typically white-box) x86 server platforms running embedded Windows/Unix/Linux code. EMC’s last non-x86 platform code was developed in the 1990s. XIV was started by ex-EMC wizard Moshe Yanai and built from the ground up to run on cheap Supermicro white-box servers that were ‘proprietized’ and sold at obscene margins.

It’s a short step to take any of these x86 code stacks running on thinly disguised white-box Xeon motherboards and now move the ‘array’ controller code into a virtual machine — that can run anywhere. Examples include LeftHand array IP morphing into HP’s “StoreVirtual VSA”, running in a virtual machine adjacent to the workloads (read FAST). As an aside — this is the essence of what is now called ‘hyperconverged’.

A few vendors were (in hindsight anyway) visionaries here. DataCore, for example, built an array controller stack on Windows in 1998 – then blew the whole world of ‘hardware-defined’ arrays out of the water on SPC-1 in 2003. Recently they’ve done it again, posting the fastest response times ever measured on SPC-1, first in 2003 and again in 2015/2016. They used a hybrid flash-disk solution on off-the-shelf servers with the cheapest available flash SSDs (SATA SSDs at that) plus HDDs – and it was faster than any all-flash array ever tested, by a factor of 5 or more.

DataCore’s SDS on COTS rig blew away EMC’s VNX All-Flash configuration on every metric, most notably cost/IOPS and response times.

Has the value of array IP totally collapsed? Or does the value of array IP need to be re-assessed in the context of software-defined rather than hardware-defined architectures? I for one am looking forward to seeing other SDS vendors demonstrate the value of their IP in audited and peer-reviewed contexts like SPC-1. This will give customers a valid basis for comparison and help judge the worth of array IP in business value terms.


Steve Chalmers May 27, 2016 at 4:25 pm

Perhaps it’s worthwhile to look at this topic not from the storage silo (whether that’s a disk array, a Fibre Channel SAN, a NAS appliance, or a company building each of these), but rather from the viewpoint of the application.

The application can only see a traditional storage system through a traditional storage stack. An SSD inside a storage-silo box, regardless of whose box it is, is limited in latency (and therefore in 1/latency application performance) by a storage stack which the box maker doesn’t control.

So of course it makes sense that when Microsoft decided to pull storage work back into Windows Server (a decade ago) and use RDMA, which had been languishing on the shelf for a decade – or when VMware did something similar – customers would find a cost, if not performance, advantage.

Note that these new approaches evaporate the silo wall between what a decade ago we called servers, and what we called storage.

The next chapter is storage class memory in servers, and how applications, database and file system middleware, and storage system software will evolve over time to best use this new hardware technology.

Will be interesting to watch over the next decade or two (remember, storage interfaces evolve glacially), and see how this plays out.

