Flash memory is opening a second front in its war on entrenched storage technologies. So far disks have been taking the heat, but DRAM is the next target.
The unannounced Sun F5100 product uses eighty 48 GB flash SO-DIMMs to create a 4 TB cache appliance. Cool.
But once the SO-DIMMs are on the market vendors will want to put them in notebooks. The question: will they displace more disks or DRAM?
Isn’t “enhance” nicer than “replace”?
The flash DIMMs are accessed as disk drives through a thin driver, which makes them fast. As disk look-alikes they aren’t a direct replacement for RAM, but a 2 or 4 GB DRAM SO-DIMM is plenty for most folks anyway.
If a flash DIMM is configured as the boot device, users will get most of the advantages of a more expensive SSD in a smaller and cheaper package. For many users 48 GB of main capacity is plenty, making a disk optional.
Also, flash is a lot cheaper than DRAM. 4 GB notebook SO-DIMMs have just come to market priced at over $100 per gigabyte. How the flash SO-DIMMs will be priced is still a mystery, but since the memory chips typically account for over 90% of a DIMM’s cost, a 48 GB flash SO-DIMM should come in under $200.
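Back of the envelope, and assuming a commodity NAND street price of a couple of dollars per gigabyte (my assumption, not a quoted figure), the estimate looks like this:

```python
# Rough 48 GB flash SO-DIMM cost estimate (2009-era assumptions, not quotes).
nand_price_per_gb = 2.50    # assumed street price of commodity NAND, $/GB
capacity_gb = 48

chip_cost   = nand_price_per_gb * capacity_gb   # ~$120 of flash chips
module_cost = chip_cost / 0.90                  # chips are ~90% of a DIMM's cost
print(f"Estimated module cost: ${module_cost:.0f}")   # ~$133, comfortably under $200
```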
Expensive compared to a disk drive, cheap compared to DRAM. But not cheap enough for netbooks.
Peaceful coexistence?
In 2 years notebook vendors could be offering a scaled-down version of the traditional enterprise tiered storage architecture: high-speed DRAM; fast SO-DIMM for booting, application and scratch storage; and a large capacity hard drive for bulk storage. Faster performance, more capacity, lower power consumption and cooler notebooks. It could be good.
However, as the bootable SD card slot in the new MacBooks shows, there is more than one way to get flash into a system. Some of us would like to be able to swap boot drives as easily as SD cards, but most of us will prefer to have our boot drive inside the notebook where it can’t get lost. SD cards will also continue to have a significant cost advantage over flash SO-DIMMs due to their much higher volumes.
The Storage Bits take
None of the players are standing still. In 12 to 18 months that 48 GB flash DIMM will be 96 GB or more. The current high price of 4 GB DRAM SO-DIMMs will drop to reasonable levels. And drive vendors will be offering 1.5 TB 2.5 inch drives.
Of course, we don’t know how well the new flash DIMMs will perform. Much depends on the quality of their embedded flash controllers, which often isn’t too good in first-generation designs.
Flash DIMMs will take a piece out of both the DRAM and disk markets, but how much of each remains to be seen. As usual, it is a dogfight in the storage business, and this one could get really nasty.
Here’s a thought: how about a 50 GB notebook with a 1 TB disk? Technically, 48 GB of that is flash, but since it is on an SO-DIMM and most consumers don’t know the difference between DRAM and disk, let alone flash, is it really that wrong? If it is defensible, expect storage marketers to do it.
If I were Fusion-io, I’d be looking into the flash DIMM business. Knowing them, they already have.
Comments welcome, of course. I worked in Sun’s storage group for 3 years in the mid-90s. I’ve done work for Fusion-io in the last couple of years.
How about Intel’s recently announced approach of putting flash on the motherboard to accelerate any old SATA disk? Adaptec announced something along the same lines recently as well:
http://www.theregister.co.uk/2009/09/09/adaptec_maxiq/
Up to 128GB of SSD per RAID controller.
I’d like to see something like that bolted onto fiber channel cards or something.
Fusion-io has some device that’s supposed to come out that will allow the cards to talk to each other over 10GbE or something; sounds interesting for mirroring and such. My NAS vendor plans to investigate it and hopefully integrate it into their systems.
I wonder how performance would compare vs. something like the block-based dynamic storage tiering that Compellent has, and that EMC seems about to get (in somewhat limited form initially).
Robin, I heard Leo Laporte talk about this with an OptiBay…use the space from your DVD drive and put in a hard drive:
http://www.mcetech.com/optibay/
Looked at from the (silicon) supply side, the answer to whether Flash will be more successful displacing HDD or DRAM might be “neither”.
In the last several months, a number of top brains in the industry have looked at NAND Flash in the “Enterprise” context, specifically focusing on supply-side economics and the challenges of scaling down to sub-50nm. These include IBM’s Steven Hetzler, SanDisk’s Eli Harari, and a number of others.
Their perspectives are profoundly gloomy and, worse yet, universally ignored by the storage industry.
For the latest example, a few weeks ago Sun’s lead technologist for Flash, Michael Cornwell, presented at the Flash Memory Summit. He pointed out THE elephant in the room, one that has been known and ignored for many years now.
NAND Flash performance — specifically those attributes most important for Enterprise applications — gets worse as capacities go up and as cost/GB goes down. Dramatically worse. Spectacularly worse.
Here’s what Cornwell said about what he called the NAND Flash “lithography death march” (directly from the presentation):
– Flash Endurance = 1/10th of 3 years ago
– Flash Write Speed = 1/4 performance of 2 years ago
– Flash Read Speed = 1/6 performance of 2 years ago
Here’s the killer: “NAND will have higher latency than HDD in 2 (lithography) generations”.
And this is from Sun’s senior Flash guy, and he’s dead right.
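To put those relative factors in concrete terms, here’s a quick back-of-the-envelope; the baseline figures are my own assumptions for an older-generation SLC part, not numbers from Cornwell’s slides:

```python
# Hypothetical older-generation SLC baselines (assumptions, not from the slides),
# scaled by the factors Cornwell quoted.
baseline = {"endurance (P/E cycles)": 100_000, "write speed (MB/s)": 20.0, "read speed (MB/s)": 60.0}
factor   = {"endurance (P/E cycles)": 1 / 10,  "write speed (MB/s)": 1 / 4, "read speed (MB/s)": 1 / 6}

for metric, base in baseline.items():
    print(f"{metric}: {base:g} -> {base * factor[metric]:g}")
```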
For those who weren’t at the Flash Summit, here’s an article that summarizes Cornwell’s comments (though they mangled the latency comment)…
http://www.eetimes.com/showArticle.jhtml?articleID=219200284
I applaud Sun’s approach to Flash — but will we ever see 48GB Flash DIMMs for $200? Google turns up $1,749 for Sun P/N #371-4531 “CRU,ASSY,24GB SATA FMOD”
Today, customers are being asked to plunk down as much as $25,000 for 146 GB “Enterprise SSDs”. Some are doing so based on the false promise that capacities are going to go up, costs are going to come down, and performance is going to get better, or at least stay the same.
The industry knows this ain’t gonna happen. Isn’t it about time to fess up?
@KD Mann:
You bring up a lot of things in your comment, but most of them should not be interpreted the way you are interpreting them.
1. The flash manufacturers (the IC manufacturers) hate the lithography death march. It shrinks profit margins, and it is really about volume economics, where the largest-volume supplier eats the smaller suppliers’ lunch. If you look at the panel of “experts” and the relative positions of their flash manufacturing, I would say they are simply defending their turf. IBM, SanDisk and Stec all have something to lose to Samsung and Intel because of volume economics, hence they are trying to scare people into thinking it will get worse. SanDisk, for example, is not pushing the price down the right way: instead of producing more to drive the price down, they are shoving more bits (4-bit-per-cell MLC) onto their chips. This is the crappiest engineering I have seen. Samsung, on the other hand, is increasing SLC reliability to 1 million cycles, so I am not sure what Michael Cornwell is talking about.
2. 48 GB flash SO-DIMM modules could be $200 if vendors wanted, but to get there they would have to move to 2-bit MLC. OCZ could do it today if they wanted to, since they are already offering 60 GB SATA drives for $150. I don’t see anything that prevents them from offering 48 GB MLC SO-DIMMs next year for $200.
3. The price you quoted for the Sun 24 GB FMOD…hehe, in case you haven’t figured it out, Sun likes to charge a 300% premium on anything they source. Then they source the most expensive parts so they can take a 300% premium on top of that premium. It is a multiplier trick. I wish Sun’s MBAs would stop doing shady things like this and drop the dumb-ass Stec SSDs in the toilet. DIYers have been using X25-Ms and X25-Es interchangeably in 2.5 inch hot-swappable JBODs for the longest time.
@TS,
1) For SLC, the ONLY way to increase capacity (and reduce cost/GB) is to reduce feature size (the litho death march), but we’re near the end of that road, so SLC is ultimately a dead end. For MLC, the only way forward is more bits per cell. For both technologies, performance gets worse as we go along, but the ONLY meaningful future cost decreases come from more bits per cell. There is speculation about 3-D lithography (going vertical), but the performance slide still remains.
2) Your suggestion of “producing more to drive the price down” ignores a fundamental reality: every economy of scale in the NAND flash industry has already been followed to (and beyond) the point of diminishing returns. That’s something everyone (including Samsung) agrees on. I suggest you look at Dr. Hetzler’s research first, and comment on it (and not him) after you have seen it.
3) Could someone sell Flash modules at 10% margins (~$200) “if they wanted to”? Sure, but the Enterprise SSD market that is driving STEC to a $2 billion market cap is predicated on 80% gross margins (several-hundred-percent “premiums”, to use your parlance). There is NO incentive anywhere in the ecosystem to do this, so I think people who suggest it are merely fueling unrealistic customer expectations about the future of Flash.
1. I agree with you that SLC is ultimately a dead end. My opinion is that Intel and Samsung will hit a brick wall at the 16nm process shrink (32nm and 22nm are both pretty much solved), and hence can no longer increase speed and density via shrinks. I am a major fan of Adam Leventhal, who proposed the Hybrid Storage Pool model of using flash. It is my belief that by 2016 we will replace the SLC ZILs with PRAM ZILs, because we only need a very small solid-state device for the write caching (30 seconds of writes, or less than 50 GB or so; see the rough sizing sketch at the end of this comment). So mirrored PRAM ZILs on 22nm lithography in 2016 can take care of the write-caching portion of the ZFS HSP. The read portion can be put on MLC because you still have redundant hard drives below the L2ARC to preserve data; MLC L2ARCs can fail in bunches and still be good enough. I sure hope we stay at 2-bit MLC for a long time, but multi-bit MLC can be used for the L2ARC flash tier and its reliability won’t be a problem.
2. I won’t go into detail on why I think both Stec and SanDisk will die before we hit the lithography wall in 2016, but ultimately I don’t even view them as competitors in the SSD arena right now. In the foreseeable future, it is Intel vs. Samsung+Indilinx+OCZ. All the proprietary form-factor players like Fusion-io and Sun’s SO-DIMM will die off, because nothing can match the volume economics of 2.5 inch hot-plug SATA II/III, thanks to the laptop market.
3. Hence STEC’s market cap isn’t sustainable if they rely on the 80% premiums. Neither is Sun’s. Robin’s estimate of how much the F5100 will cost is conservative and reasonable (the market price for SLC is $10/GB, so 4 TB of SLC is about $40K; tag on a 100% premium and you get Robin’s estimate of $80-90K). But looking at the 24 GB FMOD price of $1.7K per DIMM, you are talking about $150K for a 2 TB F5100 and $300K for a 4 TB F5100. That’s a 300%+ premium over Robin’s conservative price target. One has to ask why 4 TB of SLC is worth $300K when you can fill 8 HP MSA50s with 80x 32/64 GB Intel X25-Es and get 2.5/5 TB of Intel SLC for roughly $30K/$50K (less with Intel’s 34nm shrink coming out in a few months). There is no statistical evidence that Intel’s SLC is worse than Stec’s. (BTW, those who argue that the ZeusIOPS is 8x faster at writes than the X25-E can shut up; the X25-E G2 will likely implement power-safe caching and do the same thing the ZeusIOPS does to get its high write numbers.)
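Working out the numbers in point 3 (the $10/GB SLC market price and the $1,749 FMOD price are as cited above; extrapolating the per-DIMM price linearly to 4 TB is my simplification):

```python
# F5100 pricing sanity check, using the figures cited in point 3.
slc_market_per_gb = 10                   # $/GB market price for SLC
f5100_capacity_gb = 4 * 1024             # 4 TB appliance

raw_flash           = slc_market_per_gb * f5100_capacity_gb   # ~$41K of SLC
with_100pct_premium = raw_flash * 2                           # ~$82K: Robin's $80-90K estimate

fmod_per_gb           = 1749 / 24                             # ~$73/GB at the 24 GB FMOD price
f5100_at_fmod_pricing = fmod_per_gb * f5100_capacity_gb       # ~$300K for 4 TB (~$150K for 2 TB)

print(f"${with_100pct_premium:,.0f} at market+100% vs ${f5100_at_fmod_pricing:,.0f} at FMOD pricing")
```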
I worship Jeff Bonwick, Adam Leventhal and Brendan Gregg. Sun’s weakness has never been engineering. It is the industry’s common view that Sun’s sale to Oracle was the result of great engineering talent being ruined by moronic MBAs and their non-transparent pricing.
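For what it’s worth, the “30 seconds of writes” slog sizing from point 1 works out roughly like this; the sustained write rate is an assumption purely for illustration:

```python
# Rough ZIL/slog sizing per the rule of thumb in point 1 (assumed write rate).
sustained_writes_gb_per_s = 1.5   # assumed incoming synchronous write rate, GB/s
window_s = 30                     # seconds of writes the log must be able to absorb

slog_gb = sustained_writes_gb_per_s * window_s
print(f"slog needs roughly {slog_gb:.0f} GB")   # ~45 GB, i.e. "less than 50 GB or so"
```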
I’m not sure the SO-DIMM form factor will have anything to do with this; as far as I understand Sun’s specs, the modules are not a drop-in replacement for normal memory modules.
A smaller form factor for flash storage is something that’s desirable, but (IMO) SO-DIMM would not be the right choice for consumers; can you imagine the confusion if someone wants to add RAM to his netbook and sees two identical slots that will both accept his SO-DIMM?
I think the most likely trend for netbooks will be to accept 1.8″ SATA devices; there are now traditional HDDs available in sizes up to 160 GB, which is enough for the average netbook, and this is an acceptable form factor for SSDs as well. Even more interesting would be a netbook with two or more 1.8″ connections, allowing you to mix SSD and HDD devices.
@TS,
Getting back to the topic of Robin’s post: will NAND Flash take more market share from HDD or from DRAM? If NAND flash continues to get slower as feature size shrinks, the answer could be neither.
To your comments: when Sun’s Michael Cornwell said “NAND Flash will have higher latency than HDD in 2 (lithography) generations”, I believe the two generations he was referring to are 34/32nm and 22nm.
It doesn’t much matter when the lithographic process shrinks (the production technologies) get worked out. The problem is that the fundamental nature of NAND is that =each= generation is dramatically slower than the last. Beyond 22nm, there won’t be any NAND Flash performance advantage over HDD, and certainly no competition for DRAM.
As far as when…I think 2012 is more likely than 2016. Consider: in 2006, Intel/Micron’s NAND production was at 72nm; they skipped the 4x node and went to 52nm in 2007, and began shipping 34nm early this year.
That’s three generations in three years. We’ll be at 22NM in mid-late 2010.
NAND will definitely take more market share from HDD. (RAM slots will be maxed out anyway in a DB design.) If you look at the recent Exadata v2 design, even Larry Ellison recognized that fat 7200 RPM SATA drives are the way to go there. So margins will shrink for 10K and 15K drives, and then they die off.
Actually, looking at the Intel X25-M specifications between the 50nm node and the 34nm node, Intel’s MLC actually sped up about 20%. I think when Michael Cornwell said “flash will get slower”, he was referring to SanDisk relentlessly adding more bits per cell to compete. If you double the bits per cell, you get a geometric degradation in performance, so by the time SanDisk gets to 8 bits per cell it will be comparable to 15K SAS drives in terms of access time.
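The geometric part comes from the number of voltage levels a cell has to resolve: each extra bit per cell doubles the levels, and program/verify time grows with the level count. A toy illustration (the SLC baseline and the linear-in-levels scaling are assumptions, purely illustrative):

```python
# Toy model: page program time vs. bits per cell, assuming the time grows with
# the number of voltage levels (assumed SLC baseline; illustrative only).
base_program_us = 200          # assumed SLC (2-level) page program time, microseconds

for bits in (1, 2, 3, 4):
    levels = 2 ** bits
    program_us = base_program_us * levels / 2   # SLC as the reference point
    print(f"{bits} bit/cell ({levels:2d} levels): ~{program_us:.0f} us per page program")
```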
If we stay at the 2-bit MLC level, each shrink will only add performance. Personally I don’t ever see the need to go beyond 2-bit MLC, even if it means the $/GB of SSDs falls a lot slower than projected, especially in the enterprise environment.
@anonymous
re: your comment — “…when Michael Cornwell said, ‘flash will get slower’, he was referring to Sandisk relentlessly adding multiple bits per cell to compete. ”
– I’m guessing you didn’t see his presentation. No…Cornwell was explicitly referring to lithography, not bits-per-cell.
Re: “If we stay at 2bits per cell MLC level, each shrink will only add performance.”
No…each shrink reduces performance, independent of bits-per-cell. Toshiba’s Ken Takeuchi (inventor of MLC Flash) lays it all out here:
http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4585977
Ironic to think that we’ve been talking for 30 years about the I/O density problem with disks, where performance increases only very slowly while capacity increases dramatically, resulting in an absolute decline in IOPS/GB. Now here we are with Flash, where performance actually decreases as capacity scales, and “NAND will have higher latency than HDD in 2 (lithography) generations”.
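The disk half of that irony is easy to put numbers on: a drive’s random-I/O ceiling is set by seek plus rotational latency and has barely moved, while capacities have grown enormously. A rough illustration with typical figures from memory, not vendor specs:

```python
# Rough IOPS-per-GB comparison for 15K enterprise disks, then and now
# (typical figures from memory, not vendor specs).
drives = {
    "circa-2000 18 GB 15K drive":  {"capacity_gb": 18,  "iops": 180},
    "circa-2009 450 GB 15K drive": {"capacity_gb": 450, "iops": 200},
}
for name, d in drives.items():
    print(f"{name}: {d['iops'] / d['capacity_gb']:.2f} IOPS/GB")
# ~10 IOPS/GB then vs ~0.4 IOPS/GB now: capacity up 25x, IOPS nearly flat.
```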