Some commenters defending traditional storage have stated that flash arrays are not ideal for every workload. I couldn’t agree more.
But that raises the real question: what are the high-cost, big iron arrays like the Symm good for?
Functional obsolescence
If we look at the last 10 years of array development, developers and users have been trying to overcome the inherent architectural limitations of enterprise arrays. When a costly enterprise resource is averaging about 30% utilization, something is terribly wrong.
Thin provisioning, short stroking, huge caches and – lately – SSD caching are all attempts to deal with the fact that the big iron infrastructure is unable to meet critical enterprise requirements as built.
The enterprise needs I/O, not capacity. Capacity is cheap and I/Os are dear – except on big arrays – where both are costly.
Which gets back to the original question: what workloads are enterprise arrays good for? Because as several flash array vendors – Pure Storage among them – have pointed out, with inline compression, dedup and higher average utilization, flash arrays don’t cost much more per gig and offer much higher performance per watt and per square foot.
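To make that economics argument concrete, here’s a back-of-envelope sketch in Python. Every number in it – prices, capacities, IOPS, utilization, the data-reduction ratio – is an illustrative assumption, not vendor pricing:

```python
# Back-of-envelope comparison of effective $/GB and $/IOPS.
# ALL figures below are illustrative assumptions, not vendor pricing.

def effective_cost(price, raw_gb, iops, utilization=1.0, data_reduction=1.0):
    """Return ($ per usable GB, $ per IOPS) after utilization and data reduction."""
    usable_gb = raw_gb * utilization * data_reduction
    return price / usable_gb, price / iops

# Hypothetical big iron array: lots of raw capacity, modest IOPS,
# ~30% average utilization, no inline data reduction.
disk_gb, disk_iops = effective_cost(price=1_000_000, raw_gb=200_000,
                                    iops=50_000, utilization=0.30)

# Hypothetical flash array: less raw capacity, but inline compression
# and dedup (assume 4:1 combined) plus much higher utilization.
flash_gb, flash_iops = effective_cost(price=400_000, raw_gb=40_000,
                                      iops=500_000, utilization=0.80,
                                      data_reduction=4.0)

print(f"big iron: ${disk_gb:.2f}/GB  ${disk_iops:.2f}/IOPS")
print(f"flash:    ${flash_gb:.2f}/GB  ${flash_iops:.2f}/IOPS")
```

With those made-up but plausible inputs the flash array comes out ahead on both metrics – which is precisely the vendors’ point.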
20 years of schooling
One answer might be any application that requires enterprise levels of availability. After all, enterprise storage arrays have been in development for 20 years.
Another might be related more to management: organizations that have trained their SysAdmin staff to use and manage high-end arrays could find it costly to make the conversion.
The latter is simply a way of saying inertia rules the data center. But data centers are factories, not artists’ studios, and they need regularity and consistency above all else. Data center managers will get dinged more for failing to stay on schedule than they will get kudos for being creative.
The StorageMojo take
Product categories that overshoot market needs are like a cartoon character running off a cliff: everything’s fine until they look down. That moment has arrived for the enterprise array business.
The PC business overshot the same way starting 10 years ago. The rage for netbooks a few years ago was a symptom of consumer desire for smaller, lighter and cheaper systems. The iPad embodied those unmet needs, and the PC business has begun a long-term decline.
Flash arrays – and other flash-enabled storage – are similarly disruptive. When you look at all the inventive effort to make enterprise arrays not look like the big, clumsy and overpriced dinosaurs they are, it only underscores how ready customers are for something better.
Amazon Web Services benefits from that discontent. Flash arrays do too.
Courteous comments welcome, of course. So tell us: what workloads are enterprise arrays ideal for on a price/performance basis?
Processing Medicare claims (on the mainframe). Processing them is a limited license to print money, and if you’re down for even a short time there are financial penalties. Not likely to become irrelevant any time soon, given the ACA.
Disclosure – EMCer here.
Robin – in my experience, there is NO QUESTION that there will be erosion/cannibalization of “traditional hybrid transactional arrays” (in multiple segments of the market, and in both “scale up” and “scale out” architectural variants).
That erosion will come from all-flash arrays to be sure, and also from the newer (and IMO, more disruptive) technologies of distributed DAS models that are still transactional in nature (think VSAN, ScaleIO, etc.).
Anyone trying to “stick to their knitting” and protect/defend/coast their mature architectures using inertia alone is going to find themselves in trouble over time. Rest assured, that’s not the state of mind here inside EMC, hence the ongoing R&D and M&A into new places (and continual investment and innovation in the more mature architectures), and continual development on new architectural models around “information persistence”.
BUT… you’re missing something. It’s not just “inertia”, and people will continue to use enterprise storage arrays for many, many workloads (in fact, this is still a growing TAM – but growing more slowly than in the past, with the new TAMs of AFA, Distributed DAS, and “sea of storage” all growing faster).
What are the drivers? It’s not just “performance” (in those cases AFAs tend to win). It’s the data services that some customers count on. Think of consistency groups, where there are thousands (in some cases tens of thousands) of CGs. Or really tight replication topologies. Or non-disruptive operations (not that the new architectures won’t get there, but they take time to mature).
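For readers who haven’t run into the term: a consistency group snapshots a set of volumes as one write-order-consistent unit, so a database’s data and log volumes can’t get out of sync in the copy. Here’s a toy sketch of the guarantee – not any vendor’s implementation:

```python
import threading

class ConsistencyGroup:
    """Toy model: snapshot several volumes as one crash-consistent unit."""
    def __init__(self, volumes):
        self.volumes = volumes        # dict: volume name -> {block: data}
        self.lock = threading.Lock()  # stands in for real I/O fencing

    def snapshot(self):
        # Pause writes across ALL member volumes, then copy, so no
        # snapshot can contain a write that a dependent volume lacks.
        with self.lock:
            return {name: dict(blocks) for name, blocks in self.volumes.items()}

# A database typically needs its data and log volumes snapped together:
cg = ConsistencyGroup({"db_data": {0: "row"}, "db_log": {0: "commit"}})
print(cg.snapshot())
```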
Suffice it to say the era of “I need a persistence layer for low latency, therefore I’ll put it on a VMAX or HDS VSP” is over. If it’s just about performance, they are not the primary choice. It’s just that often… it’s not just about performance.
My 2 cents!
…”The latter is simply a way of saying inertia rules the data center. But data centers are factories, not artists’ studios, and they need regularity and consistency above all else. Data center managers will get dinged more for failing to stay on schedule than they will get kudos for being creative.”
Oh how true – very well stated! Most data centers I’ve seen don’t ever achieve their potential for ‘regularity and consistency’ due to lack of discipline. The primary reason is that the management ‘team’ lacks the discipline to measure staff goals and hold them accountable. If they won’t, or can’t, do that, then they certainly won’t risk being creative.
>When a costly enterprise resource is averaging about 30% utilization something is terribly wrong.
Hi Robin,
Long time no “speak.” Can you elaborate on this quote a bit more, please?
When I started with my current company over 15 years ago, PCs were more of a toy in the factory – we have been making x-ray tubes pretty much the same way for at least 30 years. Now, when our manufacturing execution and manufacturing integration systems are down, we do not make or ship product; at $500M/year, every hour of that adds up real fast. However, those environments are new, so we have struggled with the mindset change needed to appreciate just how painful unavailability is. So the transition from ‘tier 2’ storage to ‘tier 1’ has been challenging, but IMHO necessary.
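To put a rough number on that (a simplification – the real impact depends on how much production actually stops):

```python
# Rough downtime cost implied above: revenue at risk per hour if
# production fully stops. A simplification, not a real TCO model.
annual_revenue = 500_000_000
per_hour = annual_revenue / (365 * 24)
print(f"~${per_hour:,.0f} of shipments at risk per hour of downtime")
```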
Mojo – John from EMC here, and these opinions are my own:
No one is arguing the fact that flash is changing the storage industry, and it’s about time: the spinning drive hit its maximum speed of 15K RPM years ago, while processor performance has kept doubling every year or two.
My argument for enterprise arrays is that flash will not replace the big iron arrays tomorrow, and it won’t be right for every workload. Many of my customers have a single array in their production data centers and another for DR, with array replication between them. When their core business applications run on this one array, they need and want six nines of availability, an enterprise feature set, and the mature data services Chad mentioned – the things that 20 years of Symmetrix innovation and platform hardening bring. People also buy a Symmetrix for the ecosystem of solutions and software around it, which most or all of the flash start-ups are still largely missing.
I would have to disagree that all the enterprise needs is IOPS. I have yet to see a customer needing more than 100K IOPS, yet most of the startup flash companies advertise >1M IOPS hero numbers. Disconnect? My customers range from financial, manufacturing, distribution, insurance, and even e-commerce, and typically see no more than low- to mid-20K IOPS, yet many of them have storage capacity needs that far outweigh their I/O needs. So it’s a balance between $/IOPS, $/GB, availability, feature set, and supportability. For this reason, my customers like automated storage tiering with FAST software, which allows them to tier the hot data into the server for high performance and out of the Symmetrix to a lower tier residing on lower-$/GB storage.
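The idea behind tiering is simple enough to sketch. To be clear, this is a generic illustration with invented thresholds, not EMC’s actual FAST algorithm:

```python
# Generic sketch of automated storage tiering: promote hot extents to
# flash, demote cold ones to cheap disk. Thresholds are invented and
# this is NOT EMC's FAST algorithm, just the general shape of the idea.

FLASH_THRESHOLD = 500   # avg IOPS above which an extent earns flash
DISK_THRESHOLD = 50     # avg IOPS below which it falls back to disk

def retier(extents):
    """extents: list of dicts with 'id', 'avg_iops', 'tier'."""
    for ext in extents:
        if ext["avg_iops"] >= FLASH_THRESHOLD and ext["tier"] != "flash":
            ext["tier"] = "flash"     # promote hot data
        elif ext["avg_iops"] <= DISK_THRESHOLD and ext["tier"] != "disk":
            ext["tier"] = "disk"      # demote cold data
    return extents

extents = [{"id": 1, "avg_iops": 2000, "tier": "disk"},
           {"id": 2, "avg_iops": 5, "tier": "flash"}]
print(retier(extents))
```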
-Disclosure NetApp Employee – opinions are my own
Oddly enough I find myself agreeing with Mr Sakac. It’s my personal belief that the IOPS and capacity tiers will progressively dis-integrate from a hardware perspective, until the vast majority of the capacity tier is being handled by companies for whom data management is a core competency.
At that point IOPS performance becomes effectively free: large amounts of commodity storage class memory (probably PCM, not flash) will be embedded into, or very close to, the CPU. The additional costs of this will be offset partially by reductions in the amount of DRAM that would otherwise be required, and partially by the fact that applications will expect this architecture, so economies of scale will push the prices down dramatically.
In a world where both the performance tier and the capacity tier become increasingly cheap, we will find new storage economics coming to the fore. I wrote a blog post at Computerworld – http://blogs.computerworld.com/data-storage/23074/tiered-storage-obsolete-yes-and-no-cfbdcw – where I made the case that data management will be the new tiering model.
The main reason for this is that the biggest difference between cloud providers and traditional data centers is NOT the cost of their hardware. For a traditional IT data center the largest cost is human labour; for a modern data center built on cloud principles it is the smallest (almost a rounding error). This means that highly automatable data management (or data services, if you prefer) – the heritage of the enterprise array – will be far more important than hardware costs, which will keep the investments that companies like NetApp and EMC have made in their array technology relevant for the foreseeable future.
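A toy calculation makes the point (every figure below is invented for illustration):

```python
# Toy TCO sketch of the point above: in a traditional data center labour
# dominates, so automatable data services matter more than hardware price.
# Every figure here is invented for illustration.

SALARY = 120_000  # hypothetical fully loaded admin cost per year

def annual_tco(hardware, admins):
    labour = admins * SALARY
    return hardware + labour, labour

trad_total, trad_labour = annual_tco(hardware=500_000, admins=10)
cloud_total, cloud_labour = annual_tco(hardware=500_000, admins=1)

print(f"traditional: ${trad_total:,} total, labour {trad_labour/trad_total:.0%}")
print(f"cloud-style: ${cloud_total:,} total, labour {cloud_labour/cloud_total:.0%}")
```

When labour dominates, halving the hardware price barely moves the total; automating away admin work does.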
How those services are packaged and consumed is a different and very interesting question though, don’t you think?
Regards
John Martin
Server-side caching – read/write, globally coherent, and protected – will make all-flash arrays obsolete, or at best duct-tape patches for older storage installs.
This approach is the fastest-performing one, and it also solves the I/O blender effect seen on centralized storage kit from the storage 1.0 vendors.
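Roughly, the idea looks like this – a toy sketch in which each host caches reads locally and invalidates its peers on writes; a real implementation needs persistence (the “protected” part) and a far more rigorous coherence protocol:

```python
# Toy sketch of server-side coherent caching: each host caches reads
# locally and invalidates its peers on writes. Real implementations add
# persistence ("protected") and a far more careful coherence protocol.

class CachingHost:
    def __init__(self, backend, peers):
        self.backend = backend      # shared dict standing in for the array
        self.peers = peers          # other hosts in the coherence domain
        self.cache = {}

    def read(self, key):
        if key not in self.cache:               # miss: fetch from the array
            self.cache[key] = self.backend[key]
        return self.cache[key]                  # hit: no array round trip

    def write(self, key, value):
        self.backend[key] = value               # write through to the array
        self.cache[key] = value
        for peer in self.peers:                 # keep peers coherent
            peer.cache.pop(key, None)

backend = {"blk0": "v1"}
a, b = CachingHost(backend, []), CachingHost(backend, [])
a.peers, b.peers = [b], [a]
b.read("blk0")          # b caches v1
a.write("blk0", "v2")   # invalidates b's stale copy
print(b.read("blk0"))   # -> v2, refetched from the array
```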
–the dude abides