Sizing the overconfig effect on the array market

by Robin Harris on Thursday, 30 March, 2017

For decades customers routinely overconfigured storage arrays to get performance. Customers bought the most costly hard drives – 15k SAS or FC – at huge markups. Then they’d short stroke the already limited capacity of these high cost drives – turning a 900GB drive into a, say, 300GB drive – in order to goose IOPS and throughput even further.

Then, of course, they put those costly and power-hungry drives behind large, heavily cached, dual controllers, whose DRAM was even more expensive and power-hungry. These were the best decisions customers could make at the time, but they distorted the array market in a couple of ways.

First, average enterprise capacity utilization was stuck at around 30% for years, meaning customers were buying roughly 3x the capacity they used, and very expensive capacity at that. Second, the storage industry grew fat on the 60%+ gross margins that these systems commanded.

Then SSDs & cloud took the punch bowl away
Thus the array market was smaller – in both capacity demand and revenue – than it appeared: had the industry been able to deliver the performance customers wanted without the overconfiguration, the market would have shrunk accordingly. But how much bigger did overconfiguration make it appear?

Two major headwinds are buffeting the array industry: cloud services and all-flash arrays (AFAs). While the cloud business is clearly taking revenue away from array vendors, the ready IOPS of SSDs are also crushing the traditional big iron array market – in both all-flash and hybrid arrays, the latter being about 4x the revenue of AFAs.

The back of the envelope, please
To untangle the effects on the industry, I took a 2011 IDC forecast for 2015 – they expected 3.9% CAGR – added another year of 3.9% growth to get us into 2016, and arrived at a (2011 era) WW enterprise storage market forecast of $38.75 billion.

Then I added up IDC’s 2016 actuals – $34.6B – which is 89% of $38.75B. A $4.15B shortfall.

But that’s not all. Of that $34.6B, about $4B is all flash arrays, which we can assume are mostly displacing high-end arrays.

So the total impact on the legacy array market is on the order of $8B, or over 20%. That’s how much the overconfiguration effect was costing customers – and inflating array revenue – and is now costing vendors as their business shrinks.
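The envelope math above can be checked with a quick sketch. All figures come from the post; backing out the implied 2011 base from the stated 3.9% CAGR is my own assumption about how the $38.75B was built up:

```python
# Back-of-the-envelope check of the overconfiguration estimate.
# All dollar figures are in billions, taken from the post above.

CAGR = 0.039                     # growth rate in IDC's 2011 forecast
forecast_2016 = 38.75            # 2011-era forecast, extended one year to 2016
actual_2016 = 34.6               # IDC's 2016 actuals
afa_revenue = 4.0                # all-flash array share of the 2016 actuals

shortfall = forecast_2016 - actual_2016          # ~$4.15B
ratio = actual_2016 / forecast_2016              # ~89% of forecast

# AFAs are assumed to be mostly displacing high-end arrays,
# so they count against the legacy array market too.
legacy_impact = shortfall + afa_revenue          # ~$8.15B
impact_share = legacy_impact / forecast_2016     # ~21%, i.e. "over 20%"

# Implied 2011 base after five years of 3.9% growth (my assumption):
base_2011 = forecast_2016 / (1 + CAGR) ** 5      # roughly $32B

print(f"shortfall: ${shortfall:.2f}B ({ratio:.0%} of forecast)")
print(f"legacy-market impact: ${legacy_impact:.2f}B ({impact_share:.0%})")
```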

The StorageMojo take
Yeah, this is a loosey-goosey estimate. It mixes up cloud and SSD impacts. It leaves out the fact that AFAs cost more per GB than HDD arrays. But all in all, it is a conservative estimate.

Why? Look at Infinidat. They’ve built a modern high-end array, using disk and flash, and it costs about a third of a traditional dual-redundant, big iron array. And it’s triple redundant, all in software, and using commodity hardware – like modern storage.

Infinidat’s strategy – kick ’em when they’re down – is almost as good as their architecture. But without cloud and flash, the Big Iron market would be growing even faster than IDC predicted.

Which is to say that IDC’s 2011 forecast was too conservative. If Big Data didn’t have the cloud and scale out storage to live on, we’d have Not-So-Big Data, but it still would have propelled capacity growth – and the array market would have been even larger than IDC forecast.

Courteous comments welcome, of course. Got a different take? Please share in the comments.


Petros Koutoupis March 30, 2017 at 1:33 pm

Robin,

Seems like your Infinidat link needs to be fixed. It is pointing back to your website and not the company site.

Anyway, great write up. It sure brings back a ton of memories of how data storage used to be.

Robin Harris March 30, 2017 at 1:49 pm

Petros, thanks for the ping. I had to redo the link using the WordPress GUI rather than simply putting in the HTML. Odd!

Zivan Ori May 4, 2017 at 7:28 am

Robin,
In their 2011 prediction, IDC couldn't have forecast the AFA market, nor the proliferation of data reduction techniques like dedup and compression that most Gen1 AFA products have focused on. Also, they couldn't predict how many workloads would move to the public cloud, or how the cost of components would change.
And still they missed by only 9% over a 5-year prediction: that's pretty darn impressive.

My personal conclusion is somewhat different: IT departments have a pretty much fixed budget to spend. As products improve and the cost of media decreases, they keep getting more capacity and more performance for the same dollar. The colder storage moves to the public cloud, which explains the 9% deficit, but the AFA market will continue to grow and evolve aggressively, accounting for more and more of those $34B.
