On the 11th StorageMojo asked what workloads are ideal for enterprise arrays on a price/performance basis – and got some good answers. In all cases commenters are expressing their personal opinions and not the official policy of any company they may work for.

Here are excerpts – with some comments.

Robert Clark

Processing Medicare claims (on the mainframe). Processing them is a limited license to print money, and if you’re down for even a short time there are financial penalties. Not irrelevant any time soon, due to the ACA?

Agree! Hoping Americans will ask why we sluice 20% of our premium dollars to insurance company overhead. But whoever does the work will still need big arrays.

Chad Sakac, EMC

. . . NO QUESTION that there will be erosion/cannibalization of “traditional hybrid transactional arrays” (in multiple segments of the markets and both “scale up” and “scale out” architectural variants). . . .

BUT… you’re missing something. It’s not just “inertia”, and people will continue to use enterprise storage arrays for many, many workloads (in fact, this is still a growing TAM – but growing more slowly than in the past, with the new TAMs of AFA, Distributed DAS, and “sea of storage” all growing faster).

What are the drivers? It’s not just “performance” (in those cases AFAs tend to win). It’s the data services that some customers count on. Think of consistency groups where there are thousands (in some cases tens of thousands) of CGs. Or really tight replication topologies. Or non-disruptive operations (not that the new architectures won’t get there, but they take time to mature).

Suffice it to say the era of “I need a persistence layer for low latency, therefore I’ll put it on a VMAX or HDS VSP” is over. If it’s just about performance, they are not the primary choice. It’s just that often… it’s not just about performance.

Agree that Symms have lots of value-add services that have been baked into enterprise data centers. That infrastructure rarely gets ripped out, so the revenue stream is safe for years to come.

What I’m seeing today is more ways to achieve similar functionality outside an array controller. But EMC will be the last man standing in big arrays, much as IBM is in mainframes.

John S of EMC

. . . I would have to disagree with you that IOPS are all enterprises need. I have yet to see a customer needing more than 100K IOPS. Most of the startup flash companies are advertising >1M IOPS hero numbers. Disconnect?

My customers range across financial, manufacturing, distribution, insurance, and even e-commerce, and typically see no more than low- to mid-20K IOPS, yet many of them have storage capacity needs that far outweigh their IO needs. So it’s a balance between $/IOPS, $/GB, availability, feature set, and supportability.

For this reason, my customers like automated storage tiering with FAST software, which allows them to tier hot data into the server for high performance and move colder data out of the Symmetrix to a lower tier on cheaper $/GB storage.

Overshooting customer requirements is often a leading indicator of radical change, once something better comes along. But we’re not there yet.
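To make the $/IOPS-versus-$/GB balance concrete, here is a toy placement calculation. Every price and performance figure below is a made-up illustration, not a vendor quote, and the sizing rule is deliberately crude:

```python
# Toy comparison of the cost to satisfy one workload on different tiers.
# All figures are hypothetical illustrations, not vendor pricing.

workload = {"capacity_tb": 200, "peak_iops": 20_000}

tiers = {
    # $/GB and IOPS-per-TB are rough, invented numbers.
    "all_flash": {"usd_per_gb": 2.00, "iops_per_tb": 20_000},
    "10k_sas":   {"usd_per_gb": 0.60, "iops_per_tb": 1_200},
    "nl_sas":    {"usd_per_gb": 0.15, "iops_per_tb": 300},
}

def cost_to_satisfy(tier, workload):
    """Buy the larger of the capacity the workload stores and the
    capacity implied by its IOPS requirement; cost follows from $/GB."""
    tb_for_capacity = workload["capacity_tb"]
    tb_for_iops = workload["peak_iops"] / tier["iops_per_tb"]
    tb_needed = max(tb_for_capacity, tb_for_iops)
    return tb_needed * 1_000 * tier["usd_per_gb"]  # 1 TB ~ 1,000 GB

for name, tier in tiers.items():
    print(f"{name:>10}: ${cost_to_satisfy(tier, workload):,.0f}")
```

At 20K IOPS and 200 TB the sizing is capacity-bound on every tier, which is why cheaper $/GB wins – and why tiering software tries to keep only the hot fraction on flash while the bulk of the capacity sits on the cheapest tier.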

John Martin, NetApp

Oddly enough I find myself agreeing with Mr Sakac. It’s my personal belief that the IOPS/Capacity tiers will progressively dis-integrate from a hardware perspective until the vast majority of the capacity tier is being handled by companies for whom data management is a core competency.

At that point IOPS performance becomes effectively free, as large amounts of commodity-based storage class memory (probably PCM, not Flash) will be embedded into, or very close to, the CPU. The additional cost will be offset partly by reductions in the amount of DRAM that would otherwise be required, and partly by the fact that applications will expect this architecture and economies of scale will push prices down dramatically.

In a world where both the performance tier and the capacity tier become increasingly cheap, we will find new storage economics coming to the fore. I wrote a blog post in Computer World here where I made the case that data management will be the new tiering model.

The main reason is that the biggest difference between cloud providers and traditional datacenters is NOT the cost of their hardware. In a traditional IT datacenter the largest cost is human labour; in a modern datacenter built on cloud principles it is the smallest cost (almost a rounding error). This means that highly automatable data management (or data services, if you prefer), which is the heritage of the enterprise array, will be far more important than hardware costs, which will keep the investments that companies like NetApp and EMC have made in their array technology relevant for the foreseeable future. . . .

Your labor cost observation is correct. But isn’t there a trade-off between enabling that labor to be productive and getting rid of them altogether? What if the Nimble Storage model scales up? BTW, nice post, Mr. Martin.
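To put rough, purely illustrative numbers on the labour point (the salary and TB-per-admin figures below are assumptions, not survey data):

```python
# Back-of-the-envelope admin cost per TB per year under two staffing ratios.
# Both the salary and the TB-per-admin ratios are assumptions for illustration.

fully_loaded_salary = 150_000  # USD per admin per year (assumption)

for label, tb_per_admin in [("traditional datacenter", 250),
                            ("cloud-style automated datacenter", 10_000)]:
    usd_per_tb_year = fully_loaded_salary / tb_per_admin
    print(f"{label:>33}: ${usd_per_tb_year:,.0f}/TB/year in labour")
```

At those ratios labour runs $600/TB/year in the traditional shop versus $15 in the automated one; with raw commodity disk costing on the order of tens of dollars per TB, automation – not hardware price – dominates the economics, which is Mr. Martin’s point.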

The StorageMojo take
I agree that enterprise arrays have a continuing role in the data center. Their services are baked into many data center workflows.

Commenters agreed that IOPS are not the issue. One noted that current arrays far exceed the performance requirements of most data centers.

Traditionally, performance and availability are the two top buying criteria for enterprises. All the commenters come down heavily on the side of availability as the prime reason for buying big iron storage today.

Which leads StorageMojo back to a post written over seven years ago: So Mr. Tucci, Where Are EMC’s Google Application Notes? Architecturally, today’s scale-out infrastructures can offer more data redundancy – better than RAID 6, along with geographic distribution, like CGs – plus higher performance and greater hardware resiliency than any enterprise array.
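For concreteness, here is a rough comparison of the redundancy schemes in play. The widths chosen (8+2, 3 copies, 10+4) are illustrative, not tied to any particular product:

```python
# Raw-to-usable overhead and fault tolerance of common redundancy schemes:
# classic RAID 6 inside one array versus scale-out replication or erasure
# coding spread across nodes (and potentially sites).

schemes = [
    # (name, data units, extra units, simultaneous failures tolerated, scope)
    ("RAID 6 (8+2)",          8, 2, 2, "disks within one array"),
    ("3-way replication",     1, 2, 2, "nodes, can span sites"),
    ("Erasure code (10+4)",  10, 4, 4, "nodes, can span sites"),
]

print(f"{'scheme':<22}{'raw/usable':>12}{'failures':>10}  scope")
for name, data, extra, failures, scope in schemes:
    overhead = (data + extra) / data
    print(f"{name:<22}{overhead:>11.2f}x{failures:>10}  {scope}")
```

A 10+4 code spread across racks or sites survives twice as many simultaneous failures as RAID 6 for a modest extra capacity overhead (1.4x versus 1.25x), which is the architectural point.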

True, these scale-out products do not have the software maturity or the myriad of services of enterprise arrays, but if we recall that many of those services were designed to ensure availability and resilience in the face of multiple failures, then we see how precarious the array’s long-term position in the data center is.

Thanks to all who wrote in for your thoughtful comments.

More comments welcome, as always. And a happy Thanksgiving holiday to all my American readers!