Americans love round numbers. 600HP. 200MPH. $1,000,000,000. 15% capital gains tax rate. 10Gb/s. 6TB drive. 1,000,000 IOPS. $2/GB.

Those are brawny, manly numbers that red-blooded Americans can relate to. Not fussy little decimals, sliding further into irrelevancy with each succeeding digit.

We’re even less fond of ratios. A 600HP car. What’s the power to weight ratio?

Yet, if pushed, we'll go with a ratio, say a PUE of 1.1, but we won't get excited about it.

But our love of round numbers can lead us astray. Why is 1,000,000 IOPS the go-to number for all-flash arrays?

Few applications require anywhere near 1 million IOPS. Quick, how much bandwidth do 1 million IOPS require? Who cares about IOPS when virtually every AFA has more than enough?
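To answer the bandwidth question, here's a hedged back-of-the-envelope sketch: the I/O sizes below are illustrative assumptions, not vendor specs, but they show how quickly 1 million IOPS turns into bandwidth no application consumes.

```python
# Back-of-the-envelope: bandwidth implied by 1M IOPS at a few
# common (assumed) I/O sizes. All numbers are illustrative.
IOPS = 1_000_000

for io_size_kb in (4, 8, 64):
    gb_per_s = IOPS * io_size_kb * 1024 / 1e9  # bytes/s -> GB/s
    print(f"{io_size_kb:>2} KB I/Os: {gb_per_s:.1f} GB/s")
```

Even at a modest 4 KB per I/O, that's roughly 4 GB/s sustained; at 64 KB it's over 65 GB/s, far beyond what most workloads ever drive.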

Fair point. You might argue that demand spikes of 5 or 10x normal could require that magical number. But that's an argument for over-provisioning and, thanks to AWS, CFOs are daily less interested in funding over-provisioned systems.

But there’s one thing storage systems never get enough of: less.

As in less latency. Lower latency means fewer inflight I/Os, less server I/O overhead and more server capacity. The systems can handle more work. With fewer servers, software licenses, network ports. Less power, cooling, floor space, maintenance.
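The "fewer inflight I/Os" claim follows from Little's Law: outstanding I/Os equal throughput times latency. A minimal sketch, with made-up numbers, of what lower latency buys:

```python
# Little's Law sketch: outstanding I/Os = throughput (IOPS) x latency (s).
# The IOPS and latency figures below are illustrative assumptions.
def inflight(iops: float, latency_s: float) -> float:
    return iops * latency_s

# Same 1M IOPS workload at two latencies:
print(f"{inflight(1_000_000, 1e-3):.0f}")    # 1 ms latency: 1000 outstanding I/Os
print(f"{inflight(1_000_000, 100e-6):.0f}")  # 100 us latency: 100 outstanding I/Os
```

Cutting latency 10x means the servers track 10x fewer outstanding I/Os for the same throughput, which is where the savings in server capacity, licenses, and ports come from.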

All we need is a round number to describe latency. Good luck though, because average latency is a trap. Average latency can be low, but if there are long tails – where latency goes from microseconds to many seconds – that’s a problem.
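To see why one number fails, here's a synthetic sample, with entirely made-up latencies, where 2% of I/Os stall for a full second while the rest finish in 100 microseconds:

```python
# Why a single latency number misleads: 98% of I/Os finish in 100 us,
# 2% stall for a second. All numbers are fabricated for illustration.
lat = sorted([100e-6] * 980 + [1.0] * 20)  # latencies in seconds

mean = sum(lat) / len(lat)
p50 = lat[len(lat) // 2]                # median
p99 = lat[int(0.99 * len(lat)) - 1]     # crude nearest-rank percentile

print(f"mean: {mean * 1e3:.1f} ms")  # ~20 ms: neither typical nor worst case
print(f"p50:  {p50 * 1e6:.0f} us")   # 100 us: looks great
print(f"p99:  {p99:.1f} s")          # the tail the median hides
```

The median says microseconds, the mean says milliseconds, and the 99th percentile says seconds; no single round number tells the whole story.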

Maybe a diagram would work. Like the eye diagrams used in signal engineering, which measure the effect of channel noise and intersymbol interference on channel performance.

Image courtesy Hardwareonkel

The StorageMojo take
Bottom line: it would help the AFA market to get away from the pointless IOPS number. It made more sense with disk arrays, since disks were the ultimate limiting factor.

More disks, more IOPS. More cache, lower (average) latency.

But flash arrays are basically all cache, all the time. Yes, there are caches, but they're there more to improve endurance than performance. Nothing to be gained by reminding customers of endurance issues.

StorageMojo readers: how best to describe latency? Get as close to one round number as you can!

Courteous comments welcome, of course. Or if IOPS or latency aren’t critical, what is?