Americans love round numbers. 600HP. 200MPH. $1,000,000,000. 15% capital gains tax rate. 10Gb/s. 6TB drive. 1,000,000 IOPS. $2/GB.
Those are brawny, manly numbers that red-blooded Americans can relate to. Not fussy little decimals, sliding further into irrelevancy with each succeeding digit.
We’re even less fond of ratios. A 600HP car: what’s the power-to-weight ratio?
Yet, if pushed, we’ll go with a ratio, like a PUE of 1.1, but we won’t get excited about it.
But our love of round numbers can lead us astray. Why is 1,000,000 IOPS the go-to number for all-flash arrays?
Few applications require anywhere near 1 million IOPS. Quick, how much bandwidth do 1 million IOPS require? Who cares about IOPS when virtually every AFA has more than enough?
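For scale, here’s a back-of-the-envelope sketch, assuming the 4KB I/O size that AFA benchmarks typically quote:

```python
# Back-of-the-envelope: bandwidth needed to sustain 1 million IOPS.
# The 4KB I/O size is an assumption borrowed from typical AFA benchmarks.
iops = 1_000_000
io_size_bytes = 4 * 1024  # 4 KiB per I/O

bandwidth_bytes_per_sec = iops * io_size_bytes
print(f"{bandwidth_bytes_per_sec / 1e9:.2f} GB/s")  # prints "4.10 GB/s"
```

Roughly 4 GB/s, far more than most single applications will ever drive.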
Fair point. You might argue that demand spikes of 5 or 10x normal could require that magical number. But that’s an argument for over-provisioning, and thanks to AWS, CFOs are less interested every day in funding over-provisioned systems.
But there’s one thing storage systems never get enough of: less.
As in less latency. Lower latency means fewer inflight I/Os, less server I/O overhead and more server capacity. The systems can handle more work. With fewer servers, software licenses, network ports. Less power, cooling, floor space, maintenance.
All we need is a round number to describe latency. Good luck though, because average latency is a trap. Average latency can be low, but if there are long tails – where latency goes from microseconds to many seconds – that’s a problem.
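To make the trap concrete, here’s a sketch with purely synthetic numbers: a sample where nearly every I/O completes fast, so the average looks great, while a handful of stalls stretch into hundreds of milliseconds.

```python
import statistics

# Synthetic latency sample (milliseconds): 99.9% of I/Os are fast,
# 0.1% stall badly. All values are illustrative, not measured.
latencies_ms = [0.2] * 9990 + [500.0] * 10

print(f"mean: {statistics.mean(latencies_ms):.2f} ms")  # ~0.70 ms: looks great
print(f"max:  {max(latencies_ms):.1f} ms")              # 500.0 ms: the long tail
```

The average says sub-millisecond; the tail says half a second. One round number can’t carry both facts.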
Maybe a diagram would work, like the eye diagrams used in signal engineering, which measure the effect of channel noise and intersymbol interference on channel performance.
The StorageMojo take
Bottom line: it would help the AFA market to get away from the pointless IOPS number. IOPS made more sense with disk arrays, since disks were the ultimate limiting factor.
More disks, more IOPS. More cache, lower (average) latency.
But flash arrays are basically all cache, all the time. Yes, there are caches, but they exist more to improve endurance than performance. There’s nothing to be gained by reminding customers of endurance issues.
StorageMojo readers: how best to describe latency? Get as close to one round number as you can!
Courteous comments welcome, of course. Or if IOPS or latency aren’t critical, what is?
More important than raw latency is its consistency. One way would be to express latency at the 95th percentile, rather than mean or median latency.
You nailed it. Latency is the new currency in storage, not IOPS, particularly not useless lab-queen results like 1,000,000 IOPS comprised of a 4KB I/O size and a 100% read workload. What application does that match, exactly?
Keep pushing the envelope,
-v
IOPS is like bandwidth: it doesn’t matter until you run out, at which point it starts impacting latency, which is the thing people perceive. So saying that IOPS don’t matter at all is mostly true, until it isn’t any more.
As Vaughn pointed out before (Hey there 🙂 ), 4K reads aren’t really indicative of any useful workload. The only two reasonably neutral benchmark workloads out there are SPC-1 and, arguably, SPEC SFS, neither of which is overly representative of the kinds of things we typically see being used for all-flash arrays. Things like the STAC-M3 benchmark test whole systems for workloads like real-time trading and tick analysis, and are probably good proxies for useful all-flash array performance too.
The other popular workload for all-flash arrays is VDI, so it might be interesting to use the citrix-vdi-capacity-program. But some all-disk and hybrid arrays seem to do very well in that workload too, so it may not be the best way of showing off the performance of an all-flash array either.
For the moment, I suppose we’d best stick to SPC-1, though to date not many all-flash array startups (or some of the established vendors) have submitted results. Which makes me wonder: why? Are they worried their “1 meeeeeelioon IOPS” numbers are going to look a little silly when submitted to a benchmark that resembles a real enterprise workload?
Spot on. No point doing a million IOPS with very high latency. But it’s worth mentioning that the hybrid array market has been doing this for a while (probably because they can’t claim the high IOPS numbers that AFAs quote, so it’s better to focus on average latency). Nimble Storage, for example, has demonstrated sub-millisecond latencies for up to half a million IOPS (enough for most people). The two metrics, IOPS and latency, go hand in hand. Fazal made a very good point about consistency too!
Completely agree that *consistent* latency is an important measure, given that all the AFAs can handle huge IOPS numbers.
But what about everything else? Many of the AFAs have huge feature/functionality gaps and some have seriously limited RAS compared to the traditional arrays we know and love/hate.
How useful is an AFA with 1 million IOPS at <1ms consistent latency for enterprise applications if you have to take it offline to upgrade software? What about replication?
In some cases (to steal your analogy) it's like a 600HP car, but we're missing seat belts, airbags, traction control, etc.
Why confine it to just one round number? I’d suggest using median latency and the Gini coefficient. That gives you a good idea of both average performance without the outliers and consistency. After all, sometimes maximum latency matters, and sometimes it doesn’t.
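A sketch of that suggestion, using the standard sorted-rank formula for the Gini coefficient on synthetic latency samples (all values made up for illustration):

```python
import statistics

def gini(samples):
    """Gini coefficient: 0 means perfectly consistent latency; values
    near 1 mean a few samples dominate (severe latency outliers)."""
    xs = sorted(samples)
    n = len(xs)
    ranked_sum = sum(rank * x for rank, x in enumerate(xs, start=1))
    return 2 * ranked_sum / (n * sum(xs)) - (n + 1) / n

# Synthetic samples (milliseconds), purely illustrative.
consistent = [0.5] * 100          # every I/O takes 0.5 ms
spiky = [0.2] * 95 + [50.0] * 5   # mostly fast, 5% huge outliers

print(statistics.median(consistent), round(gini(consistent), 3))  # 0.5 0.0
print(statistics.median(spiky), round(gini(spiky), 3))            # 0.2 0.879
```

Two numbers instead of one, but together they say “typically fast” and “how badly it spreads,” which is exactly what a single average hides.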
Kurtosis of the latency distribution
You should use latency and percentiles. Using >1 ms at the 99th, 98th, and 95th percentiles, for example, would give you a good baseline for expected performance. I use this for both compute and storage performance planning. Some applications only need 95% guarantees, while others, like BI, might need 99%. You can write an SLA around latency being delivered at a measured percentile rate. And you’re all correct: we can make an array rated for 75,000 4K IOPS do 154,000 512B IOPS, but the latency was 60+ ms, which is worthless.
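That SLA idea can be sketched like so, using nearest-rank percentiles (one common convention; the threshold and sample values here are made up):

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: smallest value with at least pct% of
    samples at or below it."""
    xs = sorted(samples)
    rank = math.ceil(pct * len(xs) / 100)
    return xs[rank - 1]

def meets_sla(samples, pct, threshold_ms):
    """True if latency at the given percentile is within the SLA threshold."""
    return percentile(samples, pct) <= threshold_ms

# Synthetic sample: 97% of I/Os at 0.4 ms, plus a few slow outliers.
latencies_ms = [0.4] * 97 + [1.5, 2.0, 8.0]

print(meets_sla(latencies_ms, 95, 1.0))  # True: the 95th percentile is 0.4 ms
print(meets_sla(latencies_ms, 99, 1.0))  # False: the 99th percentile is 2.0 ms
```

The same workload passes a 95th-percentile SLA and fails a 99th-percentile one, which is why the percentile has to be part of the contract, not just the threshold.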