Steve Jones of BT sent in the following, which I am publishing – with his permission – as a guest post. It has been edited and some headers added so any dodgy parts may not be his fault.

Begin the guest post:
SSD vs storage arrays
I thought it was worth doing a very quick back-of-the-envelope comparison of the Texas Memory Systems RamSan-6200 with the setup that IBM put together for their class-leading TPC-C benchmark. TPC-C is notoriously heavy on I/O and the majority of the costs are in the storage configuration.

TMS claims the RamSan-6200 is capable of 5 million IOPS and 60GB/s of throughput with 100TB of RAID-protected SSD storage in a 40U rack. It lists at $4.4m.

In comparison, IBM’s TPC-C benchmark had 11,000 15K drives, all but 8 of which were (on the costed configuration) 73.4GB drives, all mapped through 68 storage controllers with write cache. I think this would occupy upwards of 30 racks and consume more than 230kW.

The DB data space was almost all configured as RAID0 (some RAID5 on the log files). Configured as RAID0, those 11,000 drives would provide roughly 800TB of storage. The total 60-day data storage requirement, from the full disclosure report, is 172TB.

You might get 2 million random IOPS on a good day if you weren’t too heavy on the writes, although I/O queuing might be a problem at that density of access. According to the TPC-C costed report, the storage setup listed at $20m.
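
A minimal back-of-the-envelope sketch of that arithmetic; the ~180 random IOPS per 15K drive is an assumed rule-of-thumb figure, not something taken from the disclosure report:

```python
# Rough check on the IBM TPC-C storage figures.
# 73.4GB per drive comes from the costed configuration; the ~180 random
# IOPS per 15K drive is an assumed rule-of-thumb figure.

drives = 11_000
gb_per_drive = 73.4
iops_per_drive = 180          # assumption; depends heavily on the workload mix

raid0_capacity_tb = drives * gb_per_drive / 1000
aggregate_iops = drives * iops_per_drive

print(f"RAID0 capacity: ~{raid0_capacity_tb:.0f} TB")                 # ~807 TB
print(f"Aggregate random IOPS: ~{aggregate_iops / 1e6:.1f} million")  # ~2.0 million
```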

Comparing the configurations
A quick and rough comparison of two roughly 400TB configurations, the 73.4GB-drive RAID0 setup vs a RamSan-6200, would give something like the following:

RamSan-6200 (theoretical)
4 racks
240 GB/s throughput
20 million random IOPS (not sure about read/write ratios)
Random access time < 0.2ms with InfiniBand, < 0.4ms on FC (my estimates)
Power < 30kW (5TB seems to take 325W)
List price - $17.6m
Discount available - ???

Note, however, that the RamSan config might be missing a few elements costed in the IBM configuration.

IBM
30+ racks?
220 GB/s (68 controllers, each with 8GB of cache and 8 x 4Gbps host ports; optimistically 400MBps per FC)
2 million random IOPS (my estimate with a read-dominated load and RAID0)
Random access time 4-5ms read, < 0.5ms write due to write cache (my estimates), before contention
Power - 230kW (my estimate; perhaps 180kW on drives, the rest on the 68 controllers)
List price - $20.4m
Discount available - better than 65% (TPC rules allow for available discounts to be included)

Some of those differences are about an order of magnitude or so. This is the IBM TPC-C result.
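
For reference, the RamSan figures above are simply the published single-rack specification scaled linearly to four racks to cover roughly 400TB; a minimal sketch of that scaling, assuming it really is linear (which is optimistic):

```python
# Scale the published single-rack RamSan-6200 figures to a ~400TB configuration.
# Assumes perfectly linear scaling across racks, which is optimistic.

per_rack = {
    "capacity_tb": 100,
    "random_iops": 5_000_000,
    "throughput_gbs": 60,
    "list_price_m": 4.4,
}

target_tb = 400
racks = target_tb // per_rack["capacity_tb"]                                # 4 racks

print(f"Racks: {racks}")
print(f"Random IOPS: {racks * per_rack['random_iops'] / 1e6:.0f} million")  # 20 million
print(f"Throughput: {racks * per_rack['throughput_gbs']} GB/s")             # 240 GB/s
print(f"List price: ${racks * per_rack['list_price_m']:.1f}m")              # $17.6m
```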

Power and cooling
I’d suspect the SSD setup is going to save perhaps 1.8GWh per year in electricity, assuming the IBM storage config uses about 230kW and the RamSan-6200 about 30kW. To this the A/C costs need to be added, so more like 2.5GWh per year, plus A/C maintenance and so on.
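
A minimal sketch of that estimate, assuming the 230kW and 30kW figures above plus roughly 40% cooling overhead on top (an assumed ratio, which is what gets you from ~1.8GWh to ~2.5GWh):

```python
# Annual electricity saved by swapping the disk config for the SSD config.
# The 230kW and 30kW draws come from the estimates above; the 40% cooling
# overhead is an assumption, not a measured figure.

hours_per_year = 24 * 365                 # 8,760 hours
disk_kw, ssd_kw = 230, 30
cooling_overhead = 0.4                    # assumed A/C load on top of the IT load

saved_gwh = (disk_kw - ssd_kw) * hours_per_year / 1e6       # ~1.75 GWh/year
saved_with_ac = saved_gwh * (1 + cooling_overhead)          # ~2.45 GWh/year

print(f"Electricity saved: ~{saved_gwh:.2f} GWh/year")
print(f"Including cooling: ~{saved_with_ac:.1f} GWh/year")
```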

I rather think the available discount structures will be less favourable on the SSD setup. However, it is interesting that the SSD setup is already looking more than cost-effective against the small enterprise disk model, even before environmental costs are taken into account. If the above is typical, then even 147GB 15K enterprise drives will be killed by this sort of thing very shortly unless their prices are brought closer to those of commodity drives (which they have room to do, of course). The thing that rotating disks can’t address is the random I/O latency issue (which is why many apps are driven to 15K drives).

Where is the SSD TPC-C benchmark?
It’s interesting that nobody seems to have done a full-on SSD TPC-C benchmark, perhaps because the accountants at IBM & HP are unwilling to finance a brand-new SSD setup (IBM actually used 37GB 15K drives but costed for 74GB, so I suspect even their test lab has to make the kit last). However, it must surely happen one day soon.

Of course getting SSD down to cost per GB figures closer to those of commodity drives isn’t going to happen for a long time, although if the vendors can start to exploit MLC and lower fabrication costs with devices acceptable to the enterprise, it will further push into the rotating disk market.

Getting back to the real world
Of course, this particular case is a somewhat insane configuration. There aren’t many organisations that I know of that make extensive use of large numbers of small 15K drives to maximise IOPS performance.

In our case, the transactional loads are borne by much larger 15K drives (usually now 300GB) carrying mixed workloads spread across all the available disks in an array with the low and high usage spread out. We also have a lot of very large DBs where the average access density is moderate, but latency has to be low for acceptable batch and online usage. Consequently cost per GB is much lower, and transactional latency is bearable although you can pay the penalty as contention goes up. However, many of our transactional type apps are still primarily I/O bound on reads.

End of guest post

The StorageMojo take
Steve’s rough comparison – and readers are encouraged to refine it in the comments – touches on several data center issues.

  • Economics. SSDs have always been fast, but the $/GB number shut down the conversation before it got started for most folks. The attention given to consumer SSDs and their tradeoffs has reopened that conversation, and thanks to the econoclypse people are much more ready to listen.
  • Performance and capacity. The big difference with the flash-equipped RamSan is the capacity. It is now feasible to go all SSD for a set of major database applications – with the performance of traditional SSDs.
  • Power/cooling/floorspace. The GWh saved sounds significant – especially if you are provisioning a new data center or running out of power in an existing one.
  • Availability/reliability. TPC-C doesn’t address this directly, but with 11,000 drives there would be almost daily drive failures (see the sketch after this list). The system can recover from all of them, but there has to be some recovery performance hit and, more importantly, there is a non-negligible chance of human error in drive replacement. How does that factor in?
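
A rough sketch of the failure-rate arithmetic behind that last bullet, assuming a 2-3% annual failure rate per drive (an assumed range, not a figure from the benchmark disclosure):

```python
# How often would an 11,000-drive array see a drive failure?
# Assumes a 2-3% annual failure rate (AFR) per drive -- an assumed range.

drives = 11_000
for afr in (0.02, 0.03):
    failures_per_year = drives * afr
    days_between_failures = 365 / failures_per_year
    print(f"AFR {afr:.0%}: ~{failures_per_year:.0f} failures/year, "
          f"roughly one every {days_between_failures:.1f} days")
```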

Steve, thank you for this guest post.

Courteous comments welcome, of course. TMS advertises on StorageMojo.