The cost of flash versus even 15k FC drives has made it common practice to compare flash’s “usable gigabytes” to disk array capacity. Pure Storage and, lately, HP have invoked the idea that, with proper techniques, usable flash capacity can be competitive with the per-gigabyte cost of disk arrays.

I wrote about this on ZDNet, and Martin Glassborow of StorageBod.com tweeted that he’d taken a more pessimistic view in a post and concluded:

The problem is that many of us don’t have time to carry out proper engineering tests; so I find it best to be as pessimistic as possible…I’d rather be pleasantly surprised than have an horrible shock.

I’m with Martin. Pleasant surprises beat horrible ones every day.

Contingency
While I’d noted the contingent nature of these techniques – compression only works if your data is compressible, for instance – I’d concluded that, in the main, what flash vendors are asserting is legitimate. Yes, there’s a risk – a black swan – that your data’s entropy rises to 100%, making it incompressible; that every thin-provisioned app needs its full allotment at once; that massive updates force snapshots to copy everything on write and, incidentally, render deduplication useless.

In other words, welcome to storage hell.

How much pessimism can you afford?
Yet that view, at bottom, says we should continue to overconfigure storage to cover every eventuality. Which is nice if you can afford it.

Enterprise disk arrays typically use only 30-40 percent of their expensive capacity. Disk systems could use some of these techniques too – and one wonders why they haven’t – which has left a clear field for flash vendors.

The point flash vendors are making is that, given what flash costs AND its enormous benefits, we should relax our paranoia a couple of notches, use these techniques, and achieve cost parity with traditional, expensively overconfigured, high-end arrays. Think eventual consistency, which enables much of the goodness of the cloud while introducing some new problems to manage.

The end of theory and the beginning of wisdom
The techniques flash vendors employ include some old standbys as well as more modern technologies.

  • Compression.
  • De-duplication.
  • Advanced erasure codes.
  • Thin provisioning.
  • Snapshots.

All make assumptions about data and/or usage that may not always hold. For example, LZW assumes that data is compressible – i.e., roughly 50 percent entropy – but give it already compressed data and it’s stuck, and your nominally “available” capacity suddenly shrinks. People have seen this problem with tape compression, yet tape survives.
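
You can see the effect in a few lines of Python – a toy illustration using zlib’s DEFLATE rather than LZW, but the behavior is the same: compress once and you win, compress the output again and you don’t.

    import zlib

    # Repetitive data compresses well; its own compressed output does not.
    original = b"all work and no play makes jack a dull boy " * 1000
    once = zlib.compress(original)
    twice = zlib.compress(once)

    print(len(original))   # 43,000 bytes
    print(len(once))       # a small fraction of the original
    print(len(twice))      # about the same as the first pass, or slightly larger

Already-compressed, encrypted or media data behaves like that second pass: the data reduction the capacity quote assumed simply isn’t there.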

De-duplication keeps one copy of your data, plus a list of pointers and changes. If that list is corrupted, so is your data, maybe lots of data. So those data structures need to be bulletproof. Yet this too seems manageable.
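
For readers who like to see the moving parts, here is a minimal, hypothetical sketch of content-addressed dedup in Python – a dictionary of unique blocks plus per-file pointer lists, nothing like any shipping array’s internals:

    import hashlib

    class DedupStore:
        """Toy content-addressed store: one copy per unique block."""
        def __init__(self, block_size=4096):
            self.block_size = block_size
            self.blocks = {}   # fingerprint -> the single stored copy of a block
            self.files = {}    # file name -> list of fingerprints (the pointers)

        def write(self, name, data):
            pointers = []
            for i in range(0, len(data), self.block_size):
                block = data[i:i + self.block_size]
                fp = hashlib.sha256(block).hexdigest()
                self.blocks.setdefault(fp, block)   # store each unique block once
                pointers.append(fp)
            self.files[name] = pointers

        def read(self, name):
            # Corrupt self.files or self.blocks and every file that shared
            # those blocks is unreadable; hence the metadata must be bulletproof.
            return b"".join(self.blocks[fp] for fp in self.files[name])

    store = DedupStore()
    store.write("a.img", b"\x00" * 40960)   # ten identical 4 KB zero blocks
    store.write("b.img", b"\x00" * 40960)   # dedupes entirely against a.img
    print(len(store.blocks))                # 1 unique block backs both "files"

Two 40 KB writes, one 4 KB block on media – and one shared data structure standing between you and both copies.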

Thin provisioning assumes that your apps won’t all demand their full provisioned capacity at the same time. A pretty safe bet, but a bet nonetheless.
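
Here is an equally hypothetical sketch of that bet: the pool promises more capacity than it has and only consumes physical blocks when something is actually written.

    class ThinPool:
        """Toy thin-provisioning pool: allocate on write, oversubscribe freely."""
        def __init__(self, physical_blocks):
            self.physical_blocks = physical_blocks
            self.allocated = 0    # blocks actually backed by media
            self.volumes = {}     # volume name -> advertised size in blocks

        def create_volume(self, name, advertised_blocks):
            # Provisioning is free; the pool happily promises more than it owns.
            self.volumes[name] = advertised_blocks

        def write_block(self, volume):
            # Physical capacity is consumed only when a block is first written.
            if self.allocated >= self.physical_blocks:
                raise RuntimeError("pool exhausted: the apps called the bluff")
            self.allocated += 1

    pool = ThinPool(physical_blocks=1000)
    pool.create_volume("db", 800)
    pool.create_volume("mail", 800)    # 1,600 blocks promised against 1,000 real ones
    for _ in range(600):
        pool.write_block("db")         # fine today; the danger is every volume filling at once

Real arrays add reserve thresholds and alerts, but the underlying wager – advertised capacity greater than physical capacity – is the same.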

The StorageMojo questions for readers are:

  • Have any of these techniques bitten you?
  • What happened?
  • What did you do?
  • Under what circumstances would you assume you only have raw capacity available?

Please be as specific as time and memory allow.

The StorageMojo take
The goal here is to replace the “there be dragons” fear of the unknown with some guideposts. Likelihood, warning signs, preventive action.

Storage people are innately conservative – it’s what we do – but if we can triple the effective capacity of our data processing by using flash at the cost of losing 0.00001% of uptime – about three seconds a year – don’t we have an obligation to accept the risk?

Or should we ignore vendor positioning and insist on enough raw capacity to handle every contingency and damn the cost?

Courteous comments welcome, of course. What say you?