Herewith continues NAND – an engineer’s perspective.
And you thought marketing guys were wordy! The quoted bits are from the earlier StorageMojo post Notebook flash SSD market: fantasy or mirage?. Teil eins ist hier (part one is here).
Begin part zwei
. . . tested application performance hardly changes either . . . .
Actually, this makes sense. If you are accessing 4k of data, then both HDD and SSD are fast enough and you don’t care. If you are accessing a 1MB file, that is 256 x 4k sector accesses, and the sectors will be laid out one after the other, which is where HDDs perform well. SSDs will shine when you need to do 256 x 4k sector accesses and the sectors are scattered across the disk, but as far as I know this access pattern is not common except on servers.
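To put rough numbers on that intuition, here is a back-of-the-envelope sketch. Every latency and bandwidth figure in it is an assumption picked for illustration, not a measurement of any real drive; the point is only that per-access overhead dominates scattered 4k reads on an HDD and barely matters on an SSD.

```python
# Illustrative only -- all latency/bandwidth numbers below are assumptions.
SECTOR = 4 * 1024          # 4 KB access unit
FILE_SIZE = 1024 * 1024    # 1 MB file = 256 sectors

HDD_ACCESS_S = 0.010       # ~10 ms seek + rotational latency (assumed)
HDD_BW = 60e6              # ~60 MB/s sequential transfer rate (assumed)
SSD_ACCESS_S = 0.0001      # ~0.1 ms per random read (assumed)
SSD_BW = 100e6             # ~100 MB/s transfer rate (assumed)

sectors = FILE_SIZE // SECTOR

# Sequential: one access penalty, then one contiguous transfer.
hdd_seq = HDD_ACCESS_S + FILE_SIZE / HDD_BW
ssd_seq = SSD_ACCESS_S + FILE_SIZE / SSD_BW

# Scattered: every 4 KB sector pays the per-access penalty.
hdd_rand = sectors * (HDD_ACCESS_S + SECTOR / HDD_BW)
ssd_rand = sectors * (SSD_ACCESS_S + SECTOR / SSD_BW)

print(f"sequential 1 MB: HDD {hdd_seq*1000:6.1f} ms, SSD {ssd_seq*1000:6.1f} ms")
print(f"scattered  1 MB: HDD {hdd_rand*1000:6.0f} ms, SSD {ssd_rand*1000:6.1f} ms")
```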
And what about the 4-bit MLC that Toshiba is counting on to drive costs down?
I’m a NAND flash fan, but this is scary stuff for me. To store 1 bit in a bit cell, you need to distinguish between two voltage levels. To store 2 bits, you need to distinguish 4 levels. For 3 bits, 8 levels. For 4 bits, 16 levels. I think at the 4 bit/16 level point, we’re down to where 10-20 individual electrons can make the difference in the bits read out.
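A quick way to see why each additional bit per cell hurts: the number of voltage levels doubles with every bit, so the margin between adjacent levels shrinks fast. A minimal sketch of that relationship:

```python
# Levels double with each bit per cell; with a roughly fixed voltage window,
# the spacing between adjacent levels shrinks as 1 / (levels - 1).
for bits in (1, 2, 3, 4):
    levels = 2 ** bits
    relative_margin = 1.0 / (levels - 1)
    print(f"{bits} bit(s)/cell -> {levels:2d} levels, "
          f"margin ~{relative_margin:.2f} of the SLC window")
```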
This will be less durable than current SLC. How do you explain that to consumers?
The answer is easy, but doing it is hard. You have to make it so that the issues are completely invisible to consumers.
Note that this has been done successfully with flash for years. Most of the memory cards (SD, MMC, etc) that people have been buying for years use MLC flash.
Flash has read errors – that’s why vendors implement error detection.
NAND chips are generally organized in write pages, with a spare area for each page – typically a 2kB page with 64B of spare area. The spare area is used to store ECC parity data and metadata (more about this shortly).
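For illustration, here is one plausible way such a page could be laid out. How the 64B spare area is split between ECC parity and metadata varies by vendor and ECC strength; the field names and sizes below are assumptions, not any chip’s datasheet.

```python
# Hypothetical small-page NAND layout: 2 KB data + 64 B spare per write page.
PAGE_DATA_BYTES = 2048
PAGE_SPARE_BYTES = 64

spare_layout = {            # all field sizes are illustrative assumptions
    "ecc_parity":      48,  # parity covering the 2 KB payload
    "logical_page_no":  4,  # which logical page this physical page holds
    "sequence_no":      4,  # write ordering, to pick the newest copy
    "bad_block_mark":   1,  # factory/runtime bad-block marker
    "reserved":         7,
}

assert sum(spare_layout.values()) == PAGE_SPARE_BYTES
print(f"{PAGE_DATA_BYTES} B data + {PAGE_SPARE_BYTES} B spare per page:",
      spare_layout)
```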
HDDs have read errors as well; they also write their data to the platter using ECC and other algorithms that make it easier to recover the bit clock and align the heads when reading the data back.
But flash has a problem disks don’t: flash drives move your data around a lot more often than disks do. Every time a flash drive writes a page, it has to erase the entire block that page is in.
Not quite right. Generally, a page can only be written once, and has to be erased before it can be written again. And unfortunately, erases can only be done on an erase block, which is usually 64 write pages. If you have to erase a page, then you might have to move 63 other pages to free up the erase block – yuck! It happens sometimes, but the FTL (flash translation layer) software that manages all of this is usually optimized to avoid this situation as much as possible.
The normal scenario is that you write a page, and the FTL just puts the new data in a new page somewhere and marks the old page as obsolete. Once the FTL runs low on space, it needs to do garbage collection, but if you put a little extra NAND in your system so that even a full filesystem has some empty pages, you can make that pretty rare.
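A toy sketch of that normal scenario, with the write path doing nothing but remapping: the FTL sends every write to a fresh page and marks the old copy obsolete. The class, the sizes, and the decision to leave garbage collection as a comment are all simplifications for illustration, not how any particular drive’s FTL works.

```python
PAGES_PER_BLOCK = 64   # erase block = 64 write pages, as described above

class ToyFTL:
    """Minimal illustration of write redirection; not a real FTL."""

    def __init__(self, num_blocks):
        self.map = {}            # logical page -> physical (block, page)
        self.obsolete = set()    # physical pages holding stale data
        self.free = [(b, p) for b in range(num_blocks)
                            for p in range(PAGES_PER_BLOCK)]

    def write(self, logical_page, data):
        # No erase in the hot path: grab a fresh page and remap
        # (the payload itself is not modeled here).
        if logical_page in self.map:
            self.obsolete.add(self.map[logical_page])   # old copy is now stale
        self.map[logical_page] = self.free.pop(0)       # data lands in a new page
        # When self.free runs low, a garbage collector would pick a mostly-
        # obsolete block, relocate its few live pages, erase it, and return
        # its 64 pages to the free list. Omitted here for brevity.

ftl = ToyFTL(num_blocks=4)
ftl.write(7, b"hello")
ftl.write(7, b"hello again")    # same logical page, different physical page
print(ftl.map[7], len(ftl.obsolete), "obsolete page(s)")
```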
No hard numbers from the vendors – depends on how good their signal processing algorithms are – but it could easily be 5,000 writes – down from 10,000 today.
Actually, some of the NAND vendors are already at 5k erase/write cycles today. This, and slow write speeds, are definitely the weak links for MLC NAND.
I believe that it is possible to do a good enough job with caches in the computer’s DRAM and in the FTL to make a system built from 5k-endurance NAND work for a very long time.
Note that the 5k number is a statistical thing – this is the number of cycles at which about x% of the blocks will have failed (I think x% = 50%, but I didn’t look it up). This means that some blocks might fail when the part is new, and some might last a lot longer. If the software is done right, then the amount of available storage space will gradually shrink as blocks fail, and the entire drive won’t suddenly fail.
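To get a feel for what a 5k-cycle rating means in practice, here is a rough lifetime estimate. Only the 5,000-cycle figure comes from the discussion above; the capacity, write-amplification, and daily-write numbers are made-up assumptions, and perfect wear leveling is assumed.

```python
# Rough endurance estimate -- every figure except erase_cycles is an assumption.
capacity_gb = 64              # drive capacity (assumed)
erase_cycles = 5_000          # rated erase/write cycles per block (from the text)
write_amplification = 2.0     # internal writes per host write, FTL overhead (assumed)
host_writes_gb_per_day = 10   # daily host writes (assumed)

total_host_writes_gb = capacity_gb * erase_cycles / write_amplification
lifetime_years = total_host_writes_gb / host_writes_gb_per_day / 365
print(f"~{total_host_writes_gb:,.0f} GB of host writes, "
      f"roughly {lifetime_years:.0f} years at this workload")
```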
The map that keeps track of where your data is rapidly gets very complex – and itself is regularly read and rewritten. How well protected is this critical data structure? If it isn’t bulletproof you can kiss your data goodbye.
All true. But you can also write metadata information in the spare area, to allow you to rebuild the FTL map if something goes horribly wrong.
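A sketch of what that rebuild could look like: scan every physical page, pull a logical page number and a write sequence number out of the spare area, and keep the newest copy of each logical page. The metadata field names are the hypothetical ones from the page-layout sketch earlier, not any real vendor’s format.

```python
def rebuild_map(scanned_pages):
    """Rebuild the logical->physical map from spare-area metadata.

    scanned_pages: iterable of (physical_addr, spare) pairs, where spare is a
    dict holding the hypothetical 'logical_page_no' and 'sequence_no' fields.
    """
    newest = {}   # logical page -> (sequence_no, physical_addr)
    for phys, spare in scanned_pages:
        lp, seq = spare["logical_page_no"], spare["sequence_no"]
        if lp not in newest or seq > newest[lp][0]:
            newest[lp] = (seq, phys)
    return {lp: phys for lp, (_seq, phys) in newest.items()}

# Two physical copies of logical page 7; the rebuilt map keeps the newer one.
scan = [((0, 3), {"logical_page_no": 7, "sequence_no": 12}),
        ((1, 0), {"logical_page_no": 7, "sequence_no": 13})]
print(rebuild_map(scan))   # {7: (1, 0)}
```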
Also, HDDs have the same problem with their FAT tables, or the modern equivalent. This is normally stored on the disk, and in the computer’s RAM, with the disk copy being a little out of date. Lose power at the wrong moment, and bad things can happen.
The StorageMojo take
Many thanks to the anonymous contributor. Net/net this points again to the suitability of flash drives for servers – and not so much for notebooks – the original subject.
The larger issue is the lack of transparency on the part of NAND SSD vendors. Until their architectures can be independently reviewed, we all have to rely upon marketing assurances – not! – and the useful but skimpy testing provided by sites like Anandtech.
The server-side SSD market can work with those limits. After all, the vendor of the complete system has to stand behind it.
But that is a tiny fraction of the total available market. The big win is on the consumer side: 100+ million units – if the product delivers.
Samsung, Toshiba: your current strategy is doomed. You need to engage at the consumer’s level instead of relying on the usual marketing hype. Your product is too costly, now and 3 years from now, to succeed without delivering real benefits.
You aren’t there yet.
Comments welcome, of course.
Just a quickie from a German-speaking person 🙂 It’s called “Part eins ist hier” (the “s” was missing). As “part” does not exist in German either, I would write “Teil Eins ist hier”.
After spending the last 4 months in Redmond… I have truly realized and emotionally accepted the fact:
In mathematics, no matter how many points you have, it doesn’t make a line.
The computing world is just realizing that. No matter how many point technologies you have, it doesn’t make a solution.
The fact that companies are still trying to sell plain old RAID boxes and servers with SSDs is a good example of the issue here, as you point out. SSDs “could” be a good device for servers and they “could” be a good device for the PC. Are they a good device for cell phones and mp3 players? You bet…
Sometimes we need to ask the right question to get the right answer. Some larger companies can’t, or won’t, make changes to their product lines or open up their technologies for scrutiny. Why? Do they lack fortitude or moral fiber? Are they protecting their IP, or is management just riding the wave ’til their parachutes unfold? Sometimes they are held captive by their stock prices. Then again, maybe, just maybe, it’s fear.
The fear that the truth about data integrity will send their customers running to their competitors – competitors that are willing to sell ‘their’ customers what they want to hear. And the fear that their customers will jump ship and buy something that makes them feel good, irrespective of the hard reality – that none of it is better than any other solid state memory technology. It doesn’t matter if “the check IS in the mail”; the account is overdrawn.
But are any of these the right questions? Is this just a transitional phase every industry suffers? Does any of this really matter?
The business of papyrus in ancient times is a good model to draw from. Making parchments was an expensive process by modern terms. When parchments were finished as a message, they were often recycled to make other parchments. When the parchments wore out or decomposed, professional scribes made copies – all an expensive process by today’s standards. Shifting forward a few thousand years, we don’t see many people erasing a piece of paper so it can be re-used, although some economics do favor recycling paper. For the most part, paper products used as information media are disposable: used once and discarded. How many of us use the backs of sticky notes? It is disposable, “use once” media.
This is the model for multibit solid state memory. Soon it will be so low cost that it will be treated as disposable media. One year ago, I was paying $40 for a 2GB USB drive. A few days ago I saw a 16GB USB drive for close to $30. In 2 years it may cost less than $10.
When data integrity becomes an issue, I’m sure there will be manufacturers selling RAID and tandem versions of the SSD/SSM products. Businesses will be overflowing with opportunities for data chip replacement service organizations. Who says our service oriented economy is limited to printer toner, waiters, cashiers and housekeepers?
Back on track… The next generation of decision makers in IT will be growing up with disposable data storage media integrated into everyday products. It will be as natural for them to accept these SSD products into enterprise and business infrastructures as the last generation accepted SCSI and Fibre Channel. Today, consumers accept SSM in their mp3 players and cell phones and don’t know or care if it’s 5,000, 10,000 or 1M write cycles. When they need better data integrity, increased capacity or performance, they will purchase more, for more copies of data. The old chips sit in the drawer, get thrown away or sometimes end up on eBay.
Someone will front-end SS memory (hmmm, we’ve never heard of that before) with a big old BBU RAM cache. IT departments will have to justify spending $1000/mo on new memory chips, while facilities spend more than that on flushing the toilets.
cheers,
x
Wow – you don’t even pay attention to the comments that you yourself choose to publish.
What part of “I think that the SSD drive makers can do a MUCH better job than they’ve done so far, and that the raw technology is capable of doing much better. I think eventually the SSD products will get better, and we’ll see SSD drives (or their successors) used almost everywhere” managed to escape you? Or of “The answer is easy, but doing it is hard. You have to make it so that the issues are completely invisible to consumers. Note that this has been done successfully with flash for years”? Or of “I believe that it is possible to do a good enough job with caches in the computer DRAM, and in the FTL to make a system built from 5k endurance work for a very long time”?
In his statement “Now, if we can do something about the power consumption of the display back light and CPU, then SSD vs. HDD might make a difference” he does seem to have ignored my earlier observation that we already *are* doing something in these areas, and he’s simply wrong in blithely asserting that “Despite what a commenter said, spinning the HDD platter doesn’t take a lot of energy” – at least I’m far more inclined to believe the numbers that I found at seagate.com which specified how much power went to which activities than I am the unsupported qualitative opinions of someone whose specialty is in another field.
But in general he seems to have a far clearer understanding of the situation than you do, and you should pay much closer attention to what he’s saying.
– bill
Bill,
StorageMojo isn’t a monologue. I’m really interested in learning from StorageMojo’s very smart audience.
StorageMojo would be a yawn if I banished every comment I disagree with. We’re all big boys and girls. We can entertain opposing ideas without our brains exploding.
That said, we will find that market forces – not engineering elegance – will drive the success or failure of notebook SSDs. Of course SSDs will get “better.” That isn’t the issue.
The issue is “will notebook SSDs get good enough to overcome their cost disadvantage in the broad consumer market?” Despite the desperate bravado of Toshiba the answer is still no.
In 3 years notebook SSDs will have no more than 10% of the market – excluding low-end Eee-type systems where cost eliminates disks – and that 10% will be confined to the high-priced ultra-light notebook market. Check back with me then.
For the record, everything I see says that small form factor flash SSDs will have their greatest success in servers and arrays whose random read workloads play to flash’s strengths.
Robin