For decades customers routinely overconfigured storage arrays to get performance. They bought the most costly hard drives – 15k SAS or FC – at huge markups. Then they'd short-stroke the already limited capacity of these high-cost drives – turning a 900GB drive into, say, a 300GB drive – to goose IOPS and throughput even further.
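Short stroking pays off because confining data to the outer tracks shrinks the average seek distance. A rough sketch of the effect – the 3.5ms and 1.2ms seek figures below are assumed, typical 15k-drive numbers for illustration, not from any vendor spec:

```python
# Illustrative short-stroking math for a 15k RPM drive.
# Seek times are assumed typical values, not measured figures.
RPM = 15000
rotational_latency_ms = (60_000 / RPM) / 2  # half a rotation = 2.0 ms

def random_read_iops(avg_seek_ms):
    """Rough random-read IOPS: 1000 ms / (avg seek + rotational latency)."""
    return 1000 / (avg_seek_ms + rotational_latency_ms)

full_stroke = random_read_iops(3.5)   # whole 900GB surface: ~182 IOPS
short_stroke = random_read_iops(1.2)  # outer third (~300GB usable): ~312 IOPS
```

Roughly a 1.7x IOPS gain for giving up two thirds of the capacity – which is why the practice was so common, and so expensive.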
Then, of course, they put those costly and power-hungry drives behind large, heavily cached, dual controllers, whose DRAM was even more expensive and power-hungry. These were the best decisions customers could make at the time, but they distorted the array market in a couple of ways.
First, average enterprise capacity utilization was stuck at around 30% for years, meaning customers were buying 3x the capacity they used – and very expensive capacity at that. Second, the storage industry grew fat on the 60%+ gross margins these systems commanded.
Then SSDs & cloud took the punch bowl away
Thus the array market was smaller – in both capacity demand and revenue – than it appeared: if arrays could have delivered the performance customers wanted without the overconfiguration, customers would have bought much less. But how much was the market inflated?
Two major headwinds are buffeting the array industry: cloud services and all flash arrays (AFAs). While the cloud business is clearly taking revenue away from array vendors, the ready IOPS of SSDs are also crushing the traditional big iron array market – in both all flash and hybrid arrays, the latter being about 4x the revenue of AFAs.
The back of the envelope, please
To untangle the effects on the industry, I took a 2011 IDC forecast for 2015 – they expected 3.9% CAGR – added another year of 3.9% growth to get us into 2016, and arrived at a (2011 era) WW enterprise storage market forecast of $38.75 billion.
Then I added up IDC’s 2016 actuals – $34.6B – which is 89% of $38.75B. A $4.15B shortfall.
But that’s not all. Of that $34.6B, about $4B is all flash arrays, which we can assume are mostly displacing high-end arrays.
So the total impact on the legacy array market is on the order of $8B, or over 20%. That’s how much the overconfiguration effect was costing customers – and inflating array revenue – and is now costing vendors as their business shrinks.
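The arithmetic above can be checked in a few lines – a sketch using the article's rounded dollar figures (all $B):

```python
# Back-of-the-envelope check, using the article's rounded numbers (all $B).
forecast_2016 = 38.75   # 2011-era IDC forecast extended one year at 3.9% CAGR
actual_2016 = 34.6      # IDC's 2016 actuals
afa_2016 = 4.0          # ~$4B of the actuals is all flash arrays

shortfall = forecast_2016 - actual_2016       # ~$4.15B; actuals are ~89% of forecast
legacy_impact = shortfall + afa_2016          # ~$8.15B hit to the legacy array market
impact_pct = legacy_impact / forecast_2016    # ~21% – "on the order of $8B, or over 20%"
```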
The StorageMojo take
Yeah, this is a loosey-goosey estimate. It mixes up cloud and SSD impacts. It leaves out the fact that AFAs cost more per GB than HDD arrays. But all in all, it is a conservative estimate.
Why? Look at Infinidat. They've built a modern high-end array, using disk and flash, that costs about a third of a traditional dual-redundant, big iron array. And it's triple redundant, all in software, running on commodity hardware – like modern storage should be.
Infinidat’s strategy – kick ’em when they’re down – is almost as good as their architecture. But without cloud and flash, the Big Iron market would be growing even faster than IDC predicted.
Which is to say that IDC’s 2011 forecast was too conservative. If Big Data didn’t have the cloud and scale out storage to live on, we’d have Not-So-Big Data, but it still would have propelled capacity growth – and the array market would have been even larger than IDC forecast.
Courteous comments welcome, of course. Got a different take? Please share in the comments.