Goodbye, old bottleneck
StorageMojo has often urged buyers to focus on latency rather than IOPS, now that SSDs have made IOPS cheap and plentiful. This naturally leads to a focus on I/O stack latency, which multiple vendors are attacking.
But what are the implications of cheap IOPS for enterprise data center operations? That’s what’s driving the secular trend toward simpler storage.
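To put rough numbers on the latency point, here is a minimal back-of-the-envelope sketch – the latencies and queue depth are illustrative assumptions, not measurements. By Little’s Law, sustained IOPS is roughly outstanding I/Os divided by per-I/O latency, so once flash cuts latency from milliseconds to the hundreds of microseconds, even modest queue depths deliver more IOPS than most applications can use – and the latency added by the software stack becomes the number that matters.

```python
# Back-of-the-envelope: Little's Law says throughput = concurrency / latency.
# The device latencies below are illustrative assumptions, not vendor specs.

def iops(queue_depth: int, latency_s: float) -> float:
    """Sustained IOPS for a given number of outstanding I/Os and per-I/O latency."""
    return queue_depth / latency_s

hdd = iops(queue_depth=32, latency_s=10e-3)   # ~3,200 IOPS at ~10 ms per random I/O
ssd = iops(queue_depth=32, latency_s=100e-6)  # ~320,000 IOPS at ~100 us per I/O

print(f"HDD: {hdd:,.0f} IOPS  SSD: {ssd:,.0f} IOPS")
# With the media delivering IOPS this cheaply, the microseconds the I/O stack
# adds on top of the device dominate what the application actually sees.
```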
First things first
Is data center storage getting simpler? It’s clear that the big iron arrays are sucking wind – EMC’s VMAX/VNX (Information Storage) group saw peak sales in Q4 2014 – and I forecast that the decline will accelerate in coming quarters and continue for years.
We are also seeing the trend play out across a variety of product categories. The rise of data-aware storage from Qumulo and Data Gravity makes it much simpler for less skilled staff to identify storage issues.
Converged and hyperconverged platforms roll storage management in with systems management. The advanced remote monitoring and sophisticated but easy-to-use DR features from Nimble are another example.
And, of course, there are the simpler object storage interfaces of the cloud vendors, who also remove most of the management overhead from corporate IT. The software-only, commodity-based vendors, whose products partly compete with the cloud, also get it: Scale Computing’s first word on their website is “Simple”.
Why now?
- Cost. The driver. People are ≈70% of enterprise data center cost. Simpler storage = fewer and cheaper people.
- Flash. Tuning HDD-based arrays for performance took a lot of knobs and dials – and people who understood them. Flash makes high performance a given.
- Cloud. Cloud is the vise crushing EDC costs. CFOs who don’t know a switch from a server can read AWS prices and put the heat on CIOs.
- Scale. Everyone is handling much larger data stores now, so automation is a necessity.
The StorageMojo take
The IT equivalent of Formula 1 race tuning won’t disappear: some apps will always require the utmost performance. But the huge mass of users will take lower costs over the last possible IOP.
The losers are the systems that make customers pay for features they no longer need. Winners will successfully blend ease of use with performance and availability – at a competitive price.
Is the storage admin an endangered species? Yes. Their numbers will shrink as the complexity that makes them necessary declines.
Courteous comments welcome, of course.
Interesting blog, Robin. Why now? Because we can, and because people want it – ease of use appeals to everyone. In IT there is a movement toward fewer and less skilled personnel, but even those who can handle a complex system still value one that saves them time and headspace. It’s the concept Apple has exploited for years – here’s a blog on that: http://www.evaluatorgroup.com/apple-eric-slack/
I think the popularity of hyperconverged appliances, and their movement up from being merely a ROBO solution to larger organizations and more critical use cases, is a good example.
Storage admin = endangered. Storage architect?
“Step right this way, folks. I have in my hand the cure for what ails you, whatever that may be. And if you order right now we’ll throw in a free set of steak knives. Now how much would you pay?”
The problem with the “simplicity” and “guaranteed” performance of flash these days is that folks who know nothing about enterprise storage feel qualified to pick from among the available solutions – not a few of which have no viable future in the market, whether the reasons are technical or financial. And so we see companies taking one of two paths: no further need to bother with any kind of storage expert on staff, or the ability to cut the number of storage bods on staff and keep just one expert.
If you are one of those senior enterprise storage bods/architects, and your employer is one who has never felt the need to document business process models or requirements (“You know what we want, IT – just do it!”), then the time to look elsewhere is upon you.
Part of it is that many high-performance systems today are RAM-based. For instance, SAP HANA is great if your data is (1) small enough to fit in RAM and (2) you can afford enough RAM. There are other products that are truly distributed and solve problem (1), if not problem (2).
When you are dealing with main-memory databases, storage is more of an afterthought. Yes, you may want to keep a log, and you may need to rebuild the system from time to time, but the performance needs are less acute and also really different from classic “big storage”, since the workload is much more sequential.
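As a rough illustration of that last point, here is a minimal sketch of the kind of append-only durability log an in-memory store might keep – the class name, file name, and record format are assumptions for illustration, not any particular product’s design. Every write lands at the tail of one file and recovery streams it back front to back, so the disk sees sequential I/O rather than the random reads and writes classic arrays were tuned for.

```python
import json
import os

# Minimal sketch of a durability log for an in-memory store: records are
# appended to the tail of a single file, so the I/O pattern is sequential.
class AppendOnlyLog:
    def __init__(self, path: str = "store.log"):  # path is illustrative
        self.f = open(path, "ab")

    def append(self, record: dict) -> None:
        self.f.write(json.dumps(record).encode() + b"\n")
        self.f.flush()
        os.fsync(self.f.fileno())  # make the record durable before acknowledging

    def replay(self):
        """Rebuild in-memory state by streaming the log back, again sequentially."""
        with open(self.f.name, "rb") as f:
            for line in f:
                yield json.loads(line)

log = AppendOnlyLog()
log.append({"op": "set", "key": "k1", "value": 42})
state = {r["key"]: r["value"] for r in log.replay() if r["op"] == "set"}
```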
In the database world, we just install a quarter TB of RAM and call it done – every problem gets an “in-memory” solution these days.