Fast, high-capacity flash drive
Fusion-io’s impressive demo at DEMOfall07 piqued my interest – and my skepticism (see Fusion io – great demo. Now comes the hard part.). They’ve since announced pricing and refined the specs.
The ioDrive
The ioDrive is a PCIe x4 card with 80, 160 or 320 GB of NAND flash. With a claimed sustained performance of 87,500 8k IOPS – down from the 150,000 IOPS claimed at DEMOfall – the ioDrive offers fast relief from disk latency.
While not as fast as RAM – device latency is 25 µs, and I’d guess driver overhead adds more – the non-volatility, higher capacity and competitive price make it a reasonable substitute for more server RAM. Retail pricing starts at $2,400, which is the $30/GB they promised. I’d assume the larger versions will have lower $/GB pricing.
The card is supported on Red Hat AS 4.0 and SUSE ES 10.
The StorageMojo take
Flash drive performance on single-user systems has not lived up to the hype. But servers are another story.
The ioDrive gives low-end servers high-end RAID I/O performance at a much lower cost and in a much smaller footprint. Price-competitive with RAM, the ioDrive offers power-limited data centers another way to increase I/O performance without adding power-hungry arrays.
Kudos to Fusion-io for figuring out how to harness the promise of NAND flash in the real world.
Comments welcome, of course.
This could be useful for transaction logs for databases or file systems (e.g., you can specify which device(s) to put ZFS’ intent log on). Things could then be shuffled off to spinning rust at a more “leisurely” pace.
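For example – a minimal sketch, assuming a pool named tank and the ioDrive showing up as the (hypothetical) block device /dev/fioa – moving the ZFS intent log onto the flash card is a one-liner:

    zpool add tank log /dev/fioa

Synchronous writes would then land on flash first, and the pool’s spinning disks can absorb the data later at their own pace.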
Depending on “request to data” latency, it could be useful for database indexes, near-RAM caches (not as fast as RAM), high-performance OS setups – i.e., read-only memory-mapped files like DLLs and .so libraries (first shut off auto-updates) – web servers with less in-RAM file caching, and DNS, LDAP and Active Directory databases. It could work well.
I want benchmarks!
We’re thinking of using these Mtron drives, which have 80 MB/s write throughput, in Spinn3r (http://spinn3r.com):
http://feedblog.org/2007/12/13/ssd-vs-memory-the-end-is-nigh/
Is the 87k figure here write IOPS or just read?
Their numbers would yield > 700 MB/s (87,500 IOPS × 8 KB ≈ 700 MB/s).
Does anyone have any standardized I/O benchmarks on these drives?
Here is the catch: standard NAND flash has a maximum write-erase cycle life of about 100,000 cycles per block.
In normal circumstances – i.e., your pocket thumb drive – you’ll almost never come up against this limitation. However, when running an OS from this disk, or using it in an I/O-intensive application, my guess would be that you’ll hit that ceiling quickly.
During a conversation with the boys at Fusion-io they assured me that data is secure, because when the system becomes unable to erase a block it partitions that block off. This may be the case – it’s easy enough, on a write failure, to issue the write-erase to a different location and store the data there instead.
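A rough back-of-envelope on that ceiling – a sketch in Python, assuming the 80 GB model, the ~100,000-cycle figure above, perfect wear leveling and the full ~700 MB/s write rate, all of which are simplifying assumptions rather than vendor data:

    # Back-of-envelope flash endurance estimate -- all inputs are assumptions.
    capacity_bytes = 80e9      # 80 GB (smallest ioDrive)
    erase_cycles   = 100_000   # write-erase cycle life quoted above
    write_rate_bps = 700e6     # ~700 MB/s, i.e. 87,500 x 8 KB IOPS, all writes

    total_write_budget = capacity_bytes * erase_cycles        # bytes writable before wear-out
    seconds_to_wear    = total_write_budget / write_rate_bps  # at full sustained write speed
    print(f"{seconds_to_wear / 86_400:.0f} days")             # roughly 132 days

So a drive hammered with writes flat-out, 24×7, would wear out in roughly four months under these assumptions; at realistic write duty cycles that stretches into years, which is at least consistent with the seven-year claim below – but it does show the ceiling is real for write-heavy workloads.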
I don’t know – my other question is that PCIe is limited to PC servers. I don’t know too many non-x86 architecture systems that have PCIe. Seems like a pretty limited chunk of the market. (And that being said, the chunk of the market too cheap to buy a real server isn’t going to plop down $19K for 640 GB of super-fast disk – with the sole exception being VMware applications.)
There may be many cases where the new Intel drives are cheaper and faster.
For example, if you are doing video editing, the IOPS of the X25-E in a 4x RAID configuration are plenty fast, and the transfer rate would actually be higher than the Fusion-io’s.
ecards,
Hopefully someone will publish a first-hand account of four Intel X25-E SSDs striped together. Although the SATA signaling rate is 300 MB/s per channel, I wonder which SATA controllers can support the potential throughput of such a configuration.
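If anyone tries it, a plain Linux software-RAID stripe would be the obvious starting point – a sketch only, with the four SSDs assumed to appear as /dev/sdb through /dev/sde (placeholder device names):

    mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

Four X25-Es at roughly 250 MB/s of sequential reads apiece would be about 1 GB/s in aggregate, so the controller – and the bus it hangs off – matters as much as the 300 MB/s per-channel SATA limit.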
According to the techs at Fusion-io, the advanced wear-leveling algorithms allow the drive to be pounded on 24×7×365 for seven years without loss of data.
They recently increased their prices, too: $7,200 for the 160 GB version.
I would definitely not use this product to replace any of my 4TB+ file servers.
Who is able to re-outfit their datacenter with a PCIe card? It’s not like it plugs directly into a SAS/SATA/SCSI hard drive slot. Servers are not designed to have many PCIe slots, particularly in a space-constrained rack environment.
Besides, the 1 Gb/s LAN bottleneck – only about 125 MB/s – might defeat its purpose on a file server.
Regardless of the current limitations and ‘buts’, I am really excited that solid state storage has made such a dramatic leap forward in the last year. Of course the drives need to be bigger to compare with their spindly brethren, but everyone has to admit it’s going in the right direction. After what, 25 years of no real advancement in the underlying disk technology, there’s a new era coming!! Bring it on, I say.. 😀
It’s interesting to think about where this bottleneck in most home systems is leading years down the road. When they figure out the bootable drive (quite the hurdle) and deal with any data-loss issues (the data isn’t on disks anymore), I could see using these in all sorts of home and business applications.
The other little hidden gem here is that with normal disk drives, the increase in capacity is linear – you go from an 80 to a 100 to a 120 to a 150 GB HDD, and so on. These things tend to just double in size with each iteration, and their capacities are already on par with normal HDDs.