StorageMojo’s Best Paper of FAST 2017

by Robin Harris on Wednesday, 1 March, 2017

StorageMojo’s crack analyst team is attending the USENIX File and Storage Technologies (FAST) ’17 conference. As usual, there is lots of great content.

But only one paper – this year – gets the StorageMojo Best Paper nod. The conference awarded two Best Paper honors as well – Algorithms and Data Structures for Efficient Free Space Reclamation in WAFL and Application Crash Consistency and Performance with CCFS – so you have plenty of reading to catch up on. These are fine papers, but StorageMojo likes another one even better.

Redundancy Does Not Imply Fault Tolerance: Analysis of Distributed Storage Reactions to Single Errors and Corruptions by Aishwarya Ganesan, Ramnatthan Alagappan, Andrea C. Arpaci-Dusseau, and Remzi H. Arpaci-Dusseau, all of the University of Wisconsin, Madison.

From the abstract:

We analyze how modern distributed storage systems behave in the presence of file-system faults such as data corruption and read and write errors. We characterize eight popular distributed storage systems and uncover numerous bugs related to file-system fault tolerance. We find that modern distributed systems do not consistently use redundancy to recover from file-system faults: a single file-system fault can cause catastrophic outcomes such as data loss, corruption, and unavailability.

The researchers built a fault injection framework to test the systems. Earlier studies examined local file systems such as ZFS and ext4, so this paper targets popular distributed storage systems instead: Redis, ZooKeeper, Cassandra, Kafka, RethinkDB, MongoDB, LogCabin, and CockroachDB.

The paper’s chief conclusion:

. . . a single file-system fault can induce catastrophic outcomes in most modern distributed storage systems. Despite the presence of checksums, redundancy, and other resiliency methods prevalent in distributed storage, a single untimely file-system fault can lead to data loss, corruption, unavailability, and, in some cases, the spread of corruption to other intact replicas.

Yikes!
That doesn’t sound good, and it isn’t. For example:

. . . a single fault can have disastrous cluster-wide effects. Although distributed storage systems replicate data and functionality across many nodes, a single file-system fault on a single node can result in harmful cluster-wide effects; surprisingly, many distributed storage systems do not consistently use redundancy as a source of recovery.

The StorageMojo take
These conclusions should concern any user of scale-out storage. And if you are a developer of scale-out storage, you should certainly read this paper and the CCFS paper.

But there’s a larger, systemic issue that file system developers need to address. For some reason – inertia, probably – the file system community has been slow to embrace formal verification methods. That’s just silly.

Our digital civilization depends on our file systems. It’s past time to bring them into the 21st century.

Courteous comments welcome, of course.


Ian F. Adams March 2, 2017 at 9:53 am

There’s a reason that formal verification isn’t seen much outside of things like medical systems and avionics. Short version, it’s not easy.

That said, I think it’s reasonable to say that small, critical portions of a file system could benefit from formal verification. At large scale, though, I don’t think it’s realistic given the size of the code bases in question.

At least personally, my experience with formal verification, while purely academic, was painful and slow. To do it well and rigorously requires a fair bit of expertise, and even small code sections can take a long time to verify, let alone correct if they’re wrong. Then tack on that any changes may require updates and changes to your verification schemes…
