Over at the OpenSolaris zfs-discuss forum, Robert Milkowski has posted some promising test results.

Hard vs. Soft
Possibly the longest running battle in RAID circles is which is faster, hardware RAID or software RAID. Before RAID was RAID, software disk mirroring (RAID 1) was a huge profit generator for system vendors, who sold it as an add-on to their operating systems. With the advent of hardware RAID systems the battle was joined until the hardware array emerged victorious. Software RAID has been relegated to low-end, low-cost applications where folks didn’t want to spend even a few hundred dollars for a PCI RAID controller.

It’s All Software RAID
Yet the fact is that it is all software RAID – it is just a question of where the software runs. Throw a lot of hardware (and cash) at a problem and even dodgy code runs acceptably. Yet the investment that requires is also the Achilles' heel of hardware RAID: once you get everything working right on a specific platform you want to just keep selling it, even as the hardware becomes technologically obsolete. It is no accident that EMC's capacity-based pricing tiers made it uneconomic to fully expand a Symmetrix. The platform would max out well before capacity limits were reached because it was running on microprocessors that might be five years old.

Let The Battle Begin, Again!
So I’m excited to see the battle joined again. Server processors usually advance much faster than add-on co-processors – with the major exception of graphics processors, where gamer demand has driven incredible progress – so host-based RAID has a lot of built-in hardware investment behind it. ZFS offers a fundamentally re-architected RAID designed to overcome the traditional limitation of host-based RAID – the lack of non-volatile cache – through smart engineering.

So Does It Work, Already?
Short answer: yes. It is still early, both in ZFS development and in testing, but some highly suggestive numbers have been published here and here.

Robert tested against a modern, modular storage array, the Sun StorageTek 3510 FC Array, which offers a gigabyte of cache and 2Gb FC. Not an HDS Tagma, but I’d guess that in performance it is pretty close, and that what it lacks is mostly the scalability of the larger, enterprise systems.


With Hardware RAID
Robert ran these tests on a Sun Fire V440 Server. He first ran the filebench varmail workload using ZFS on the hardware RAID LUNs the 3510 provides, and ran the test twice:

IO Summary: 499078 ops 8248.0 ops/s, 40.6mb/s, 6.0ms latency
IO Summary: 503112 ops 8320.2 ops/s, 41.0mb/s, 5.9ms latency

Then he ran the same tests using the 3510’s disks as Just a Bunch Of Disks (JBOD), letting ZFS handle the RAID work itself, and got these results:

IO Summary: 558331 ops 9244.1 ops/s, 45.2mb/s, 5.2ms latency
IO Summary: 537542 ops 8899.9 ops/s, 43.5mb/s, 5.4ms latency
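
For readers who want to picture the two configurations, here is a rough sketch. The device names and the filebench invocation are my own illustrative assumptions, not Robert's actual commands:

```shell
# Hardware RAID case: the 3510 exports a RAID-protected LUN and ZFS
# simply uses it as a single device (device name is hypothetical).
zpool create tank c4t0d0

# JBOD case: ZFS sees the raw disks and provides the redundancy itself,
# e.g. as a RAID-Z group (disk names again hypothetical).
# zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0

# Run the filebench varmail workload against a filesystem on the pool
# (profile name assumed; exact syntax varies by filebench version).
filebench -f varmail.f
```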

Net Net
A strong showing by ZFS: ~10% more IOPS, ~10% lower latency, ~10% more bandwidth. Equivalent – indeed slightly better – performance at a much lower cost. Promising news for ZFS adopters and those of us cheering from the sidelines.
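
For the skeptics, the ~10% figures fall straight out of the published numbers. A quick check, averaging the two runs in each configuration:

```python
# Averages of the two runs for each configuration, copied from the
# IO Summary lines above (ops/s, ms latency, MB/s).
hw_raid = {"ops": (8248.0 + 8320.2) / 2,
           "lat": (6.0 + 5.9) / 2,
           "mb":  (40.6 + 41.0) / 2}
jbod    = {"ops": (9244.1 + 8899.9) / 2,
           "lat": (5.2 + 5.4) / 2,
           "mb":  (45.2 + 43.5) / 2}

iops_gain = (jbod["ops"] / hw_raid["ops"] - 1) * 100   # more IOPS with ZFS RAID
lat_drop  = (1 - jbod["lat"] / hw_raid["lat"]) * 100   # lower latency
bw_gain   = (jbod["mb"] / hw_raid["mb"] - 1) * 100     # more bandwidth

print(f"IOPS +{iops_gain:.1f}%, latency -{lat_drop:.1f}%, "
      f"bandwidth +{bw_gain:.1f}%")
# → IOPS +9.5%, latency -10.9%, bandwidth +8.7%
```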