A forecast says PC shipments with disk drives will drop by a third between now and 2017. IBM is pushing the all-flash datacenter. SSD startups claim that flash is really as cheap as disk, with much better performance. Is it the beginning of the end for disk drives?
Disk is the new tape – and not in a good way
Everywhere you turn pundits – and vendors – are predicting bad things for disk drives. But are they correct?
SSDs cost many times what disks cost. There’s no magic here: the NAND flash chips simply replace the heads and platters of disk drives with much more expensive transistors. So where do the cost-parity numbers come from?
From several assumptions. In no particular order these include:
- Reduced power and footprint costs from compact, cool-running SSDs.
- Reduced management time and effort required to optimize performance.
- Reduced capacity needs, thanks to in-line compression and de-duplication.
- Two-thirds of unused enterprise disk capacity wouldn’t be needed due to the superior performance of flash technology.
Let’s look at each of these in turn.
- The power savings are real. But in the average data center, power is a single-digit percentage of total cost. And there are many other consumers of power that make the savings even more marginal.
- Floorspace savings are also real, but unless you’re building a new data center or are out of space in your current one, you’re unlikely to shrink your footprint enough to claim much advantage from flash’s high density.
- If your data center is already full, which is not uncommon in densely populated areas, you can save big. In parts of New Jersey, co-location space rents for four times the price of high-end office space in New York City. There flash can save you real money and, not coincidentally, make you real money in the financial markets.
- It is also easier to achieve high performance with flash arrays than with disk arrays. In fact, system and storage admins often need to unlearn techniques that were developed to optimize disk performance when they start using flash. But unless you’re going to fire your database and storage admins, you are unlikely to actually save any money by moving to flash: those people will move on to other work.
The argument about unused capacity comes from the practice of short stroking expensive 15k Fibre Channel and SAS disks to drive IOPS. But that’s not the only reason for unused capacity. Other reasons include:
- It takes so long to order and stand up new storage that everyone is forced to over-configure or risk running out at critical times.
- Enterprise storage is typically purchased on a project basis. When a new application is brought up it often gets its own physical infrastructure. Since the growth rate for the application isn’t understood, the natural inclination is to over-configure.
- Storage and application admins typically have only a hazy idea of performance requirements. They over-configure to be safe: you get more grief for a slow app than you do for spending too much.
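The short-stroking arithmetic behind the unused-capacity argument is easy to sketch. The numbers below are illustrative assumptions – roughly 175 random IOPS per 15k spindle and 600 GB drives – not vendor specs:

```python
import math

# Sketch: why short stroking leaves capacity unused (illustrative numbers).
# Assumes ~175 random IOPS and 600 GB per 15k drive.
def drives_needed(target_iops, target_tb, iops_per_drive=175, tb_per_drive=0.6):
    """Return (drive count, fraction of purchased capacity left unused)."""
    for_iops = math.ceil(target_iops / iops_per_drive)
    for_capacity = math.ceil(target_tb / tb_per_drive)
    drives = max(for_iops, for_capacity)      # IOPS usually wins
    unused = (drives * tb_per_drive - target_tb) / (drives * tb_per_drive)
    return drives, unused

# e.g. 50,000 IOPS against only 20 TB of hot data:
drives, unused = drives_needed(50_000, 20)
# 286 spindles, with roughly 88% of the purchased capacity sitting idle
```

When the IOPS target rather than the capacity target sets the drive count, the surplus capacity is stranded – which is where the "two-thirds unused" claim comes from.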
These issues are due to the political and cultural context that has developed around application deployment in the enterprise. Moving from disk to flash isn’t a fix.
The least convincing argument in favor of the all-flash data center is that the combination of de-duplication and/or compression will shrink the data sufficiently to be competitive with the cost of disk. The bottom line is that any technology that can reduce the cost for flash can also reduce it for disk.
Nimble Storage and Tintri – among others – already do this. Their hybrid disk/flash arrays feature always-on data compression. It works at wire speed and most users probably have no idea that it even exists.
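The bottom line above is just arithmetic: a data-reduction ratio divides the effective cost per gigabyte of flash and disk by the same factor, so the price gap survives intact. The prices and the 4:1 ratio below are hypothetical, for illustration only:

```python
# Sketch: data reduction scales $/GB identically for flash and disk.
# Raw prices and the 4:1 reduction ratio are hypothetical assumptions.
def effective_cost_per_gb(raw_cost_per_gb, reduction_ratio):
    return raw_cost_per_gb / reduction_ratio

flash_raw, disk_raw = 0.50, 0.05   # hypothetical raw $/GB
ratio = 4                          # hypothetical 4:1 compression + dedup

# the same 4:1 reduction leaves flash's 10x premium unchanged
premium_before = flash_raw / disk_raw
premium_after = (effective_cost_per_gb(flash_raw, ratio)
                 / effective_cost_per_gb(disk_raw, ratio))
```

Both premiums come out the same, which is why data reduction alone can’t close the gap.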
The StorageMojo take
While someone, somewhere, will undoubtedly invest in an all-flash data center, very few businesses will go in that direction in the next 10 years. The industry is responding to the impact of flash and public cloud storage by rearchitecting storage to incorporate flash where it makes sense, and by radically reducing the cost of high-capacity, high-performance storage that incorporates disks.
The cultural issues that make data center storage expensive aren’t easy to fix, but they are fixable. New storage systems that stress commodity hardware and scale-out architectures can be looked at as horizontal layers rather than vertical stovepipes.
And the simple truth is that most data is only rarely looked at. Efficient erasure codes and geographical data protection mean that backup is more likely to disappear than disks.
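The idea behind erasure codes can be shown with the simplest possible example – single XOR parity, the scheme RAID 5 uses. Production systems use Reed-Solomon-style codes over many data and parity shards, but the recovery principle is the same; this toy is a sketch, not any vendor’s implementation:

```python
# Sketch: the simplest erasure code -- XOR parity over equal-size shards.
# Real systems use Reed-Solomon-style codes that survive multiple losses.
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(shards):
    """Compute one parity shard as the XOR of all data shards."""
    parity = shards[0]
    for s in shards[1:]:
        parity = xor_bytes(parity, s)
    return parity

def recover_missing(surviving, parity):
    """With single parity, any one lost shard is the XOR of everything else."""
    missing = parity
    for s in surviving:
        missing = xor_bytes(missing, s)
    return missing

data = [b"stor", b"age ", b"mojo"]
parity = make_parity(data)
# lose the middle shard; rebuild it from the survivors plus parity
rebuilt = recover_missing([data[0], data[2]], parity)   # == b"age "
```

Spread such shards across geographies and any single failure – or site loss, with more parity – is repairable in place, which is what erodes the case for separate backup copies of cold data.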
Courteous comments welcome, of course. Not that I’m suggesting that backup will disappear any time soon: as long as digital media are flaky we’ll need it.