RAM-based SSDs Are Toast – Yippie ki-yay!

by Robin Harris on Thursday, 19 October, 2006

Flash memory does most of what current RAM-based Solid State Disks (SSD) do, and it does it without requiring battery-backup, big power supplies or noisy fans. Plus it is cheaper. So, I started to wonder, with the advent of Samsung’s 32GB SSD are we seeing the beginning of the end of the always marginal RAM disk niche?

SSDs have one overwhelming advantage: speed. They deliver many thousands of random IOPS because access times are measured in microseconds (millionths of a second) instead of milliseconds (thousandths). They leave short-stroked 15k FC drives in the dust.
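As a rough sanity check (the access times here are illustrative assumptions, not vendor specs), random IOPS is approximately the reciprocal of access time:

```python
# Rough sanity check: random IOPS ~ 1 / access time.
# Access times below are illustrative, not vendor specs.
ram_ssd_access_s = 15e-6   # ~15 microseconds per random access
fc_disk_access_s = 5e-3    # ~5 ms for a short-stroked 15k FC drive

ram_ssd_iops = 1 / ram_ssd_access_s
fc_disk_iops = 1 / fc_disk_access_s
print(f"RAM SSD: ~{ram_ssd_iops:,.0f} IOPS")  # ~66,667
print(f"15k FC:  ~{fc_disk_iops:,.0f} IOPS")  # ~200
```

Even with pessimistic assumptions, a microsecond-class device is a couple of orders of magnitude ahead of anything that has to seek and rotate.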

Repeat 5 times quickly: RAM NOR NAND
Yep, sounds like something the Coneheads would say. My first encounter with RAM disks was at DEC, where the engineers came up with a clever design that used low-quality binned DRAM and disk-like error correcting codes (ECC) to create a lower-cost, higher-margin SCSI RAM disk. Which sold about as well as most SSDs, which is to say, not very well at all. The problem: even though performance is terrific, the price is staggering on a per GB basis. Take this pricing from the StorageMojo.com Price Lists for a Texas Memory SSD:

RS-320-FC2-64 Texas Memory Systems Hardware
RS 320 64GB 2Gigabit Fibre-Channel solid-state disc w/2 FC2 ports, upgradeable to 8 FC2 ports and 64GB. Has 3 UPSs and 3 backup disc drives.

At over $1k per GB these SSDs are strictly for the Gucci alligator-skin pocket protector crowd.

Enter the Dragon
Not all flash is created equal. There are two main types, NOR and NAND. Here is a handy table that scopes out the differences between the two:

Feature                    NOR              NAND
Density                    Lower            Higher
Read speed                 High             Medium
Write speed                Slow             Fast
Erase speed                Slow             Fast
Interface                  Memory address   Disk-like
Multi-level chip capacity  256Mb            4Gb

So what is that multi-level capacity? Glad you asked. Both NOR and NAND are available in single-level (SL) and multi-level (ML) versions. SL stores one bit per cell, while ML stores two – and I’m hearing maybe even four RSN (real soon now). ML is cheaper for a given capacity, but not that much cheaper: only about 15–20% less. The really big difference is that ML is only good for about 10,000 read/write (RW) cycles, which is plenty in a camera, but not so great for a disk drive.

SL though is rated for 100,000 RW cycles, which means that each bit of storage is cheaper than ML on a total lifecycle basis.
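The lifecycle arithmetic is simple enough to sketch, using the article’s own figures – an assumed 18% ML price discount (within the quoted 15–20% range) and the 10x endurance gap:

```python
# Normalized price and rated RW cycles, from the article's figures.
sl_price, sl_cycles = 1.00, 100_000   # single-level
ml_price, ml_cycles = 0.82, 10_000    # multi-level: ~18% cheaper, 10x fewer cycles

sl_cost_per_cycle = sl_price / sl_cycles
ml_cost_per_cycle = ml_price / ml_cycles
# ML ends up roughly 8x more expensive per write cycle delivered.
print(ml_cost_per_cycle / sl_cost_per_cycle)
```

A modest up-front discount cannot make up for a 10x endurance deficit.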

100,000 bottles of beer on the wall. . .
So, I know what you are thinking: Robin, how could you ever replace a RAM SSD with a flash SSD – the thing would wear out in a heartbeat. And you’d be almost right.

All flash drives contain wear-leveling algorithms to ensure that all cells get similar usage. So the way to think about flash drive lifetime is to take your average I/O size, figure out how many of those I/Os fit on a drive of that size, and multiply by the number of RW cycles.
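That rule of thumb can be written down directly – a minimal sketch, assuming perfect wear leveling and decimal units:

```python
def flash_lifetime_hours(capacity_bytes, io_size_bytes, rated_cycles, iops):
    """Idealized lifetime assuming perfect wear leveling:
    (I/O-sized locations on the drive) x (rated RW cycles) / (I/O rate)."""
    locations = capacity_bytes // io_size_bytes
    total_ios = locations * rated_cycles
    return total_ios / (iops * 3600)

# The article's scenario: 32 GB drive, 2k I/Os, 100,000 cycles, 500 IOPS.
print(flash_lifetime_hours(32_000_000_000, 2_000, 100_000, 500))
```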

99,999 bottles of beer on the wall. . .
Take the new Samsung 32 GB SL flash drive. Even though it is being spec’d for the notebook market, it makes a wonderful server drive because it is so fast. But how long would it last?

Let’s say you want to use it for a log file running 2k I/Os (question: do systems still do 2k I/Os? readers please help). A 32 GB drive has 16,000,000 2k locations, which multiplied by 100,000 cycles equals 1.6 trillion 2k I/Os. So if your server is updating the log file 500 times per second, which would be a reasonably busy server, you’d be doing 1,800,000 RW cycles per hour. Your 32 GB flash drive would last about 889,000 hours, or roughly 101 years of 24-hour-a-day operation.

At 1,000 IOPS, call it 50 years. 1,000 8k IOPS, about 12.7 years. 10,000 8k IOPS, about 15 months. All for, I estimate from chip prices, about $1k per drive – roughly 1/40th the price of a standard RAM-based SSD. So call me crazy, but I say flash is set to conquer the esoteric world of high-performance SSDs.
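The scenarios above can be reproduced in a few lines – a sketch using decimal GB and KB, so the results may land slightly off any hand-calculated figures:

```python
capacity = 32_000_000_000   # 32 GB in decimal bytes
cycles = 100_000            # SL flash rated RW cycles

def years(io_size, iops):
    # Total I/Os the drive can absorb, divided by the yearly I/O rate.
    total_ios = (capacity // io_size) * cycles
    return total_ios / (iops * 3600 * 24 * 365)

for io_size, iops in [(2_000, 500), (2_000, 1_000), (8_000, 1_000), (8_000, 10_000)]:
    print(f"{io_size // 1_000}k I/Os at {iops:>6,} IOPS: ~{years(io_size, iops):.1f} years")
```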

As ever, comments welcome. Moderation is turned on to defeat comment spam, but no registration required. And please, someone, check my arithmetic. I ran it several times through a calculator and can hardly believe it myself.

15 comments

Miro October 19, 2006 at 9:09 pm

Great article!

At first, 100,000 cycles doesn’t sound like much, but your numbers look reasonable. For laptops the average file size is probably larger – think internet browsing (html, jpg, flash files), usually ~20–50KB – so the flash will wear a bit faster.

By my observations, the hard drive of the average sales manager’s laptop dies once a year, sometimes faster than that – so flash still looks pretty cool even for not-so-esoteric applications.

At $1K per drive, compared to $2–3K for data recovery services, it is pretty good if you ask me.

Add to this that flash fails read-only (dead cells can still be read), which lets you recover your data. The side effect is that the drive shrinks as more cells die, but this is easy to keep under control – much easier than a suddenly dead hard drive.

Also, can you imagine how cool would be to run ZFS on flash drives! 🙂

There is also this company – Atomchip; not sure if you have seen the site or not, but these guys claim 1TB quantum storage chips about 1″x1″ in size.


It looks pretty weird in the pictures, but I really hope it is true :))

Robert Pearson October 20, 2006 at 12:19 am

Thanks, Robin…

I’ve been looking for something like this for a long time.

I think of your writing like the line in the John Wayne/John Ford movie Rio Grande.
“Trooper Harris brought us the word. We came as soon as we could.”

Jay Wang January 10, 2007 at 6:37 pm

I agree with your viewpoint, and I think SSD development is where the future is heading. The current problem is that the wear-leveling algorithms developed by controller-chip makers are mostly not good enough. They may extend CF or SD duty cycles, but not well enough to utilize all the slowly-updated cells, so the result is that we land far from the optimal duty cycle.
The SSD makers are now using very expensive, powerful controller chips, mostly ARM-based, to run wear-leveling algorithms more complicated than those in CF cards.
I think whoever delivers a cheap, high-performance controller chip can greatly speed SSD adoption in the consumer market.

Jesse January 21, 2007 at 8:35 pm

The following debunks the post about Quantum Storage by Miro.

The text below is the last segment of the page.


Not quite up to John Titor standard
Is there such a thing as “non-volatile integrated quantum-optical RAM”? I found a Web site claiming gigantic amounts of storage stored in what looks to be an optical jewel of some sort. Their sites are Atom Chip and Compu-Technics, and they are supposedly at CES 2005.

Their Web sites are very poorly made, using what looks like pictures taken with a webcam or a very cheap digital camera, yet they boast 128MB/square millimeter on a 20 micron slab, accessible at 0.6ns, which is Really Fast, considering light goes about 30 cm in 1ns through air. Can you please look into the validity of these claims and said technology?


Is there such a technology? Uh, no.

The sites in question do, indeed, display numerous imperfections as evidence of their handcrafted nature. I don’t know exactly what kind of joke/art project/product of mental illness they are, but they certainly don’t rise to the status of an actual scam. This incredible breakthrough technology has, unaccountably, failed to attract any attention at all in the more than two years they’ve been promoting it, or in the six years since the mad scientist responsible patented what looks like the process allegedly involved.

This thread, which mentions the simply outstanding video clip you can download here, pretty much sums it up. The discussion is not unlike the one you might expect if a bunch of automotive engineers found themselves unexpectedly discussing the feasibility of time-travelling DeLoreans.

Far be it from me, of course, to badmouth a technology so great it’s won what looks exactly like an Academy Award for “The complete innovation activity”. That is, I think, the most hilarious of the various awards the Compu-Technics technologies have received either because they paid for them (there are “award mills”, just like “diploma mills”, that’ve served the quack and kook communities for about as long as there’ve been quacks and kooks; you pays your money, you gets your certificate/medal/trophy), or because they turned up at some obscure trade show or convention and their wild claims were believed.

Robin Harris January 21, 2007 at 9:19 pm


You sound like just the guy to decrypt the Colossal Storage disk drive explanation. Now he got Al Shugart to invest in the company, but as Al noted, he couldn’t understand what they were talking about either.

Perpetual motion machines will always be with us. So will incredible new storage technologies (I hope), along with V8s that get 90 MPG just by clipping a magnet on the fuel line. Of course the storage companies don’t want you to know about it – it would destroy their business!

A prophet is without honor in his own country. Then again, so is the nutcase – except when he holds high political office.


Bill Todd March 1, 2007 at 12:45 am

Kind of late to be chiming in here, but …

Using SSD for a log is of course very attractive if it means major reductions in access time. OTOH, for anything but pretty light-weight use a log often has to handle considerable bandwidth as well, either because there’s a hell of a lot going on – e.g., in large TPC-C submissions databases may need to stripe their logs across multiple disks to attain sufficient bandwidth – or because once you’ve got a log, it makes sense to dump all reasonably small updates into it temporarily to make them persistent, deferring final placement until they’re about to be thrown out of the cache (i.e., until they’ve ‘cooled off’ – thus catching multiple quick updates to the same data in the log rather than writing each one back to its normal location with, of course, additional log activity for that).

Thus the answer to your question is, “No, people don’t do 2K log writes these days – not with group commits and all sorts of other data being dumped into the log. In fact, log writes can be several hundred KB in size if a backlog develops.” And a possibly better way to look at a flash disk in terms of replacing a single conventional disk is in terms of bandwidth, say, 50 – 100 MB/sec these days (though I’m not sure current flash disks can handle that yet). At that rate, a 32 GB flash drive wrapped around 100K times would last a year or two.
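Bill’s bandwidth framing is easy to sketch: the drive can absorb its capacity times its rated cycles in total writes, so lifetime at a sustained write rate is just that divided by the rate (his 50–100 MB/sec figures assumed):

```python
# Lifetime at sustained bandwidth = (capacity x rated cycles) / write rate.
capacity_mb = 32 * 1_000    # 32 GB in MB, decimal units
rated_cycles = 100_000

for mb_per_sec in (50, 100):
    seconds = capacity_mb * rated_cycles / mb_per_sec
    print(f"{mb_per_sec} MB/s sustained: ~{seconds / (3600 * 24 * 365):.1f} years")
```

At 50 MB/sec that works out to about two years, at 100 MB/sec about one – matching his "a year or two."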

But hey, you could stripe the log across several to get higher bandwidth or greater longevity – still at a cost far below conventional SSD. Or fall back to a log on a conventional disk if it failed (for that matter, *always* dump the log to conventional disk and just use the flash drive for bridge-persistence until it got there – to even out log bursts and eliminate disk-access latencies). So it’s still a good alternative to conventional SSDs for that purpose.

Unfortunately, many current uses of SSD are not log-structured but are band-aids to shore up otherwise inefficient storage practices – e.g., where a lot of potentially unnecessary disk updates are being performed and the workload has risen to the point where this inefficiency can no longer be tolerated. How well flash memory would serve here (where bandwidth requirements may be far higher than most logs would ever see) is more questionable.

– bill

grey eminence March 22, 2007 at 10:51 am

Robin Harris said, “You sound like just the guy to decrypt the Colossal Storage disk drive explanation. Now he got Al Shugart to invest in the company, but as Al noted, he couldn’t understand what they were talking about either.”
Couldn’t help but notice Al Shugart on the Board of Directors as Vice-Chairman.

So he must be a little smarter and wiser than you give him credit for.

Robin Harris March 22, 2007 at 12:07 pm


On the contrary, I was paraphrasing Al’s own comment. His accomplishments speak for themselves.


devsk August 4, 2007 at 7:59 pm

Your calculation assumes you want to wear out every 2k block on the 32 GB disk. Yes, that would take ~100 years. But you don’t have to kill every sector to see failures. Your application will start to see bad sectors much earlier than that, and wear leveling will only delay the problem – it will run out of good sectors to relocate to as the drive fills up (whether with bad sectors or through normal use).

So, I don’t buy your lifespan argument…:-)

With the logging kind of application, I will be pleased to see it last 2-3 years.

Stas August 23, 2008 at 4:10 am

Well, first of all, RAM SSDs can be at least an order of magnitude cheaper than the ones on the market – look at http://hardwareforall.com/ww/serverRAMdisk.html – a 128GB SSD at only about $10,000. And so-called fast flash SSDs are fast only because they DO use RAM buffers – their throughput in database applications drops drastically under heavy write loads (really below the best HDDs).

But surely they will make RAM SSDs even more of a niche product than before – for example, the huge web datacenters that used RAM SSDs will eventually drift to flash ones.

james braselton January 5, 2009 at 6:35 pm


TheNightFly September 18, 2009 at 12:17 pm

Storage devices that wear out on a schedule? How do you know the wear-leveling algorithms aren’t wearing out the flash RAM faster than normal? What an obvious rip-off. RAM-based SSDs may be more expensive, but you get what you pay for. Go ahead and buy a flash-based SSD. In 14 months you’ll need another one and my RAM SSD will still be running. Another 14 months and you’ll need another one again, and my RAM SSD will still be running. Then you’ll be asking me if I have any spare RAM-based SSDs lying around and I’ll be like “no, but I’ve got some old DDR RAM sticks you can have”. lol

$83,783!!! – that’s government pricing. No wonder this niche is so neglected.

Darth Continent September 23, 2010 at 1:10 pm

Thanks for an interesting article!

I remember one of my first computers was an Apple IIe, and at one point I bought a huuuge 256K RAM card and found some software that enabled me to create a RAM disk.

Given the choice of say cannibalizing a bunch of old RAM and putting it to work as a DRAM SSD, I think I’d be fine with it. At the initial startup of my computer, I’d go make coffee while my OS and data are loaded up from regular hard drives or whatever, then actually do my stuff within RAM. A good UPS would minimize the impact of brownouts, so I’d be fat and sassy until at the end of a day or before installing some updates, I’d just dump everything to conventional drives and start the process anew at reboot.

Now I’m off to see whether it might be of any use to take a chunk of my 8 GB of RAM in Windows 7 and turn it into a software-based SSD. Any input on this would be appreciated!

anon June 22, 2011 at 7:17 pm

I think you have an error in your flash longevity calculations. You assume when you write a 2K block, that 2K of the flash disk is affected. This is not the case. Flash longevity is measured in erase cycles, not writes. The entire erase block size is affected for even a 1 byte write. An erase block size of 256K would reduce your estimated 100 years of life to less than a year– which is closer to what folks have been seeing in the real world with flash based SSDs.
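Anon’s correction can be quantified. In the worst case – no write coalescing, so every small write costs a full block erase – lifetime shrinks by the ratio of erase-block size to write size. A sketch assuming the 256K erase block from the comment:

```python
# Worst case: every small write triggers a full block erase, so endurance is
# consumed per erase block rather than per 2k write.
capacity = 32_000_000_000      # 32 GB in decimal bytes
rated_cycles = 100_000
erase_block = 256 * 1024       # assumed 256 KB erase block
iops = 500                     # the post's log-write rate

blocks = capacity // erase_block
total_erases = blocks * rated_cycles
hours = total_erases / (iops * 3600)
print(hours / (24 * 365))      # lifetime drops to well under a year
```

Against the ~100-year naive estimate, that is a reduction of roughly 130x (256K / 2K) – in line with Anon’s "less than a year."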

DRAM based devices still seem to make much more sense for very high IOPs applications. Too bad they are (mostly) so outrageously priced.

Robin Harris June 22, 2011 at 11:12 pm

Anon, you point out yet another difference between SSDs: some have sophisticated FTLs that include supercap-backed DRAM so that writes are as close to a full block as possible. Some vendors even use tantalum-based caps for greater reliability. And some flash can handle 2 or 3 page writes without a full block erase/write cycle.

It is these subtle design differences that make it vital that a standard endurance test spec be employed. Users care about the results, not imponderable architectural niceties.

