Tintri responds on SSD arrays

by Robin Harris on Tuesday, 20 March, 2012

StorageMojo offered its soapbox to any vendors willing to weigh in on the question of whether enterprise arrays should be built from flash SSDs or not. Ed Lee, architect at Tintri, formerly of Data Domain, and a Berkeley Ph.D., elected to respond. It is a long piece but rich in insight.

Tintri produces hybrid disk/flash SSD appliances optimized for virtual environments, not Symm-killers. They use SSDs in their products, as do other folks like Nimble Storage.

No money changed hands between Tintri and StorageMojo or related entities. My accountant is weeping in the next room.

Begin Tintri’s response:

Outside the SSD Box: More than Faster Disk
Robin Harris of StorageMojo, in his recent article “Are SSD-based arrays a bad idea?”, and Matt Kixmoeller of Pure, in his response, “The SSD is Key to Economic Flash Arrays,” present interesting perspectives on whether or not SSDs are the best technology for building flash-based arrays. Robin argues that by rethinking how flash can be packaged outside the SSD box, you can achieve better performance, reliability, cost and flexibility. And these observations are supported by the experience of existing flash-based storage vendors who have developed their own custom flash modules and packaging. Matt argues that SSDs provide an industry-standard product that requires less investment to leverage, better economies of scale, and rapid improvement in technology. These are also very valid points, especially for startups with limited time and capital.

Latency
Taking latency as a point for comparison, flash-based storage vendors using custom packaging often quote IO latencies in the tens of microseconds versus SSD latencies in the low hundreds of microseconds. While this is a notable difference, software and interfaces also add overhead, so the final latency seen at the subsystem level may differ by only a factor of two to four. Server-side flash products can avoid more of the software and interface overhead and provide better latencies – but may require rewriting applications to capitalize on this advantage. Keep in mind that hard disk latencies can easily reach tens of milliseconds under even moderate load. All of these flash-based products have latencies hundreds of times lower than disk.

In short, most of the performance improvement comes from simply replacing hard disk with some form of flash. This immediately shifts the performance bottleneck from storage to some other component in your system. As a result, you won’t be able to take full advantage of flash performance without also optimizing the performance of the rest of your infrastructure, and ultimately rewriting your applications as well.

The above phenomenon explains why replacing your hard disk with flash often speeds up your applications by only a factor of two to three rather than ten or a hundred. Congratulations! You’ve just moved the bottleneck from storage to some other component of your system. By Amdahl’s Law, further improving only storage performance has diminishing returns. So while custom packaging does provide significant advantages in latency, most applications are unlikely to benefit until the rest of the computing ecosystem is optimized to take full advantage of flash.
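The diminishing returns described above can be made concrete with a toy Amdahl's Law calculation. The function and the 60% storage-time figure below are illustrative assumptions, not numbers from the article:

```python
def amdahl_speedup(storage_fraction, storage_speedup):
    """Overall speedup when only the storage portion of runtime is accelerated.

    storage_fraction: fraction of total runtime spent waiting on storage (0..1)
    storage_speedup:  factor by which storage itself gets faster
    """
    return 1.0 / ((1.0 - storage_fraction) + storage_fraction / storage_speedup)

# Hypothetical workload spending 60% of its time on storage I/O:
# even a 100x faster device yields well under 3x end to end.
print(amdahl_speedup(0.60, 100))   # ≈ 2.46
print(amdahl_speedup(0.60, 1e9))   # ≈ 2.5, the hard ceiling (1 / 0.4)
```

Once the storage term is driven to near zero, the remaining 40% of runtime caps the overall speedup at 2.5x, which is exactly the "factor of two to three rather than ten or a hundred" effect.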

To take a closer look at SSD latencies, I ran the following simple experiment:
1) Erase an MLC SSD so that no logical blocks were actually mapped to flash, and then issue small random reads.
2) Overwrite the entire SSD so that all logical blocks are mapped, and issue the same small random reads as in step 1.

The idea here is to measure the software and protocol overheads of accessing flash packaged as an SSD separately from the cost of accessing the data on the SSD itself. Reads with no blocks mapped had latencies of around 70µs, while reads with all blocks mapped had latencies of 250µs. So only a fraction of the overall IO latency – roughly 70µs of 250µs, or under a third – was due to software and protocol overhead, indicating that SSDs may still have significant room for improving latency.
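The measurement loop in the experiment above can be sketched as follows. This is only the timing side of the experiment: the erase step would be done separately (e.g. with a TRIM tool against a raw device), and a faithful test would open the device with O_DIRECT and aligned buffers to bypass the page cache. The function name is my own, not from the original experiment:

```python
import os
import random
import statistics
import time

def random_read_latencies(path, block=4096, samples=1000):
    """Return the median latency (in microseconds) of small random reads.

    Sketch only: run against a regular file this measures mostly the page
    cache; a real SSD experiment needs O_DIRECT on the raw device.
    """
    fd = os.open(path, os.O_RDONLY)
    try:
        size = os.lseek(fd, 0, os.SEEK_END)
        blocks = size // block
        lats = []
        for _ in range(samples):
            off = random.randrange(blocks) * block
            t0 = time.perf_counter()
            os.pread(fd, block, off)          # one small random read
            lats.append((time.perf_counter() - t0) * 1e6)
        return statistics.median(lats)
    finally:
        os.close(fd)
```

Running this once against the freshly erased device and once after overwriting it would reproduce the 70µs-vs-250µs comparison, with the difference attributable to actual flash access rather than software and protocol overhead.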

Form factor
Another important issue discussed by both Robin and Matt is the relative cost of flash packaged in SSD versus non-SSD form factors. Robin argues that an SSD costs significantly more $/GB than the underlying flash while Matt argues that non-SSD packaging is expensive to develop, and SSDs provide useful flash management functions as well as hot-swap capability. It’s certainly true that developing custom packaging has a high up front cost, although this is likely balanced by lower unit costs. But as Robin points out, there are also standard packaging options available for non-SSD form factor flash, which may make custom packaging for non-SSD flash unnecessary.

A very important point to keep in mind when thinking about commercially available SSD vs. non-SSD form factors is that SSDs are designed as a substitute for disk, while non-SSD form factors are often designed as substitutes for memory. This means that SSDs focus primarily on reducing $/GB (flash’s greatest weakness vs. disk), while non-SSD form factors focus on reducing $/IOPS (flash’s greatest weakness vs. DRAM). This explains why SSD is currently much cheaper on a $/GB basis than PCIe flash, while PCIe flash designed as memory expansion is cheaper on a $/IOPS basis than SSD. This is not to say that you can’t build a non-SSD form factor that has lower $/GB than SSD, just that the primary applications for these non-SSD form factors today are usually not as a replacement for disk.
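The two cost metrics pull in opposite directions, which a quick calculation makes visible. The prices, capacities and IOPS figures below are made-up illustrative numbers, not vendor quotes:

```python
def cost_metrics(price_usd, capacity_gb, iops):
    """Return ($/GB, $/IOPS) for a device; all inputs are hypothetical."""
    return price_usd / capacity_gb, price_usd / iops

# Illustrative numbers only: a SATA SSD vs. a PCIe flash card.
ssd  = cost_metrics(price_usd=400,  capacity_gb=200, iops=20_000)
pcie = cost_metrics(price_usd=3000, capacity_gb=300, iops=200_000)

print(f"SSD : ${ssd[0]:.2f}/GB, ${ssd[1]*1000:.1f} per 1000 IOPS")
print(f"PCIe: ${pcie[0]:.2f}/GB, ${pcie[1]*1000:.1f} per 1000 IOPS")
# With these inputs the SSD wins on $/GB while the PCIe card wins on $/IOPS.
```

Which metric matters depends on whether the device is standing in for disk (capacity-driven) or for memory (IOPS-driven), which is exactly the design split described above.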

Whether flash in SSD versus non-SSD form factors is better for use in storage subsystems in the long run primarily depends on the relative volumes of these products, and the feature and price sensitivity of the applications these products serve. At this point the ‘winning’ form-factor seems hard to predict. So as a flash subsystem vendor, it seems desirable to keep your options open and ensure that your technology will work well with a variety of packaging options.

More than just a faster disk
But flash is about more than just performance and packaging. Flash enables much more than just a faster, denser replacement for disk. With flash, we can finally remove a key mechanical barrier to scaling not only storage systems, but computing systems in general. Going forward, CPU, network and storage can now all scale with improvements in semiconductor technology. When transistors replaced vacuum tubes, we got more than just compact radios; we got simpler, more powerful computing systems. Similarly, flash is a catalyst that will enable far greater levels of automation and functionality for storage and computing systems than is possible today.

I tend to think of the value of new technology as the product of its simplicity times the functionality it offers. It’s clear why functionality is important, but why is simplicity so important? Technology that is simple to use will be used more often, to solve more problems, in less time. As a result, simplicity has a compounding effect on value:

Value = Simplicity * Functionality

How does one measure simplicity? One way is to list the basic steps it takes to perform a task and how long each step takes. One to three is good, four to six is manageable, and anything resembling a twelve step program will likely require written directions and a significant amount of focus. Note that in assessing the simplicity and functionality of a technology, one must do it in the context of the job that needs to be done. For example, a chainsaw has great features for cutting down trees but not for giving haircuts.

A common problem with many general purpose storage products when applied to applications such as virtualization is that they require executing long lists of steps to get anything done – and most of the features are not directly applicable to virtualization. Paradoxically, many of the features that try to make these products better suited to the application end up making the products more complex – resulting in little improvement in overall value. Kind of like adding too many tools to a Swiss army knife until you have so many that the attachments start to stick and rub against each other.

Flash as a catalyst
Flash eliminates a key mechanical barrier to scaling computing systems and is 400 times faster than disk. To keep things in perspective, the speed of sound is “only” 250 times faster than walking! If I could get to work at supersonic speeds, I would no doubt save a lot of time each year. But would I do no more with such an ability? Similarly, is flash just a faster replacement for disk? Will it make no significant difference in the way storage is managed and used? We obviously don’t think so. Flash will greatly increase the value of storage by improving both the simplicity and functionality of enterprise storage products. But these gains will not come easily or without their own set of problems.

An obvious way flash promotes simplicity is by eliminating performance bottlenecks, but as flash enables more dense storage systems many of those gains will be converted to problems in quality-of-service. A more significant way flash promotes value is by providing a better building block for constructing storage systems: flash promotes simplicity by enabling higher levels of automation and allows the implementation of more powerful functionality.

Flash will fragment the enterprise storage market. The general purpose storage systems of today will be supplanted by new flash-based products that are far simpler and more powerful for the specific application areas that they target. This will amplify the simplicity and power that flash already makes possible, and further accelerate the fragmentation of the storage market. This is precisely what happened in the 1980s when advances in networking technology caused a shift from centralized computing to networked computing – and in the process fragmented the direct attached storage market into ones based on networked storage technology. Over time, the networked storage markets consolidated into the current general purpose storage market dominated by a few major vendors. And so the cycle is repeating itself.

We are at the start of a new technological shift. A shift that is made possible by flash and one that will disrupt the existing enterprise storage market. Just as transistors enabled new products such as personal computers and smart phones, flash will enable simple, intelligent and fast enterprise storage systems. In turn, this will lead to much higher value for end users, but only if we think outside the storage box and treat flash as more than just a faster, denser disk.

The StorageMojo take
For the record, the original post wasn’t looking at hybrid solutions, although it is obvious that SSDs can help legacy designs stay competitive for a few years without replacing all disks. For folks like Tintri and Nimble, who want to speed up disk storage while keeping it affordable, SSDs make sense. Why engineer a small part of your system when an off-the-shelf solution will suffice?

But for high-end transactional SAN storage I still don’t see how SSDs are the right way to go. I’m expecting more responses, though, so stay tuned.

Courteous comments welcome, of course. I’m working on a post that reflects directly on Ed’s comment about SSD latency. You’ll like it.


nate March 20, 2012 at 5:58 pm

One interesting tidbit I heard from some HP folks last year was with regard to SSD warranty periods. I think they were specifically referring to how either HP or 3PAR (or perhaps both) treated SSDs before versus how they treat them now.

Before, they would only warranty SSDs that failed outright, not SSDs that simply wore out from re-writes. At one point, at least on the server end of things, HP would not warranty an SSD for any longer than it would a 7K SATA disk (1 year), while all other drives came standard with a 3-year warranty. It looks like at least on their latest Gen8 servers HP has a 3-year standard warranty on their SLC and MLC flash drives; 7K SATA disks are still stuck at 1 year. I bet HP changed their SSD supplier…

Nowadays, in the eyes of HP at least (this may or may not apply to the VSP-based P9500), they are treated like most any other part and are supported under warranty regardless of how worn out they are.

Also found it interesting that on the P4000 (and I’m sure some other platforms too; I’m not aware of anything in the 3PAR world that is similar), there is specific intelligence around knowing how “worn out” the SSD is, and the system can alert you as the SSD goes through its life cycle so you aren’t faced with large-scale SSD failures at the same time (assuming somewhat level distribution of data and even wear patterns across the system).

http://www.channelregister.co.uk/2012/01/16/tieto_vnx5700/

“Our source said, for what it’s worth: “What basically happened (in my understanding from Twitter rumours) is that Tieto had multiple SSD failures on [its] VNX5700 array Fast Cache, this resulted in data loss.””

Chris McCall March 21, 2012 at 3:51 pm

Ed hits on a great point. Flash is a new technology that enables new capabilities and ultimately customer value. From that perspective there will likely be many implementations that find success in the market place.

Jacob Marley March 25, 2012 at 12:50 am

I wonder if the new generation of flash-based (de-duping primary) storage would be cost effective ($/GB) for an organization whose data is at most 25% deduplicable?

While the idea of reducing latency for data access is appealing, I wonder if most primary data needs to be low-latency accessible?

Databases? sure
Virtualization? sure

File Services? not sure

Ed Lee March 31, 2012 at 4:02 pm

Thank you, Chris. Yes, I think many types of flash products can be successful. That is, flash will fragment the general purpose storage and computing market.

Nigel Poulton April 16, 2012 at 10:36 am

Robin.

I’m liking what you’ve been writing on solid-state; it’s one of my biggest interests these days.

On your point of the SSD form factor not being the way to go for high-end transactional systems… I have personally had 3 flash array vendors in my lab, some SSD-based and some proprietary. And believe me, the results of the extensive testing were not what you may expect!

I’m a technologist and I like custom. I like ASICs and FPGAs, etc., but I have to say that I was staggered that commodity-based flash arrays could perform so well.

This is a very interesting space!

Charlie Gautreaux August 25, 2012 at 6:07 pm

Ed/Robin,

Great post. It is nice to see the broad and forward thinking perspective on the disruption that flash will bring to the enterprise storage industry and infrastructure as a whole.

Looking forward to trying Tintri out soon.

Charlie
