The traditional model of NAS filers works fine if you only have a few. But once you get to 8 or 10 NAS filers your life gets complicated.

Your oldest data is on the oldest filer and your active data is on the newest. If that new filer bottlenecks, your entire system slows down.

Hence the old saw “you’ll love your first filer and hate your tenth.” System administrators will load balance by moving data back and forth, an inherently wasteful and error-prone exercise.

A history of failure
A number of start-ups – such as Z-force and Zambeel – have attempted a fix. The general idea is a switch that virtualizes the backend filers to create a single pool.

While the concept sounds good, results have been dismal. No storage system whose primary function was to front-end existing NAS boxes has succeeded.

Once more unto the breach
Now another contender enters the fray. Avere Systems has raised $50 million and is on v3 of their tin-wrapped software.

At NAB 2013 they announced the FXT 3800 Edge Filer. The 3800 tiers across RAM, SSD, SAS, backend NAS and cloud across one namespace.

They’re understandably proud of their new SPECsfs2008 NFS result with an FXT 3800 32-node cluster that reached 1,592,334 Ops/Sec. That beats NetApp, Isilon, Hitachi/BlueArc and everybody else, except Huawei’s OceanStor 8500, which used 24 file systems and more than twice the number of SSDs to get over 3 million Ops/Sec.

Oh, and the test included transcontinental latency in the network – the kind of latency you might see using a cloud provider like Amazon Web Services, which was showing an Avere prototype in their NAB booth.

The hard question
After the briefing by Avere I asked Ron Bianchini, the CEO and cofounder, why Avere would escape the fate of their failed predecessors.

I boiled his answer down to 4 points:

  • Avere’s appliance is a read and write cache, so hot data I/O is handled directly and not routed to the backend filers. Typically, he says, only 1 out of 50 I/Os leaves Avere for backend NAS, and for some workloads it is as little as 1 out of 200.
  • Their file system is the client of the backend filers, so they know exactly where the data is at all times. Furthermore, they’ve certified vendors like NetApp, so they handle the inevitable corner cases.
  • The system moves data across 4 tiers – DRAM, SSD, SAS and the backend filers – so it is capable of extremely high performance, unlike products that rely on backend performance.
  • They also manage blocks within files, so a change in a file doesn’t require rewriting the entire file, a popular feature in large file applications.
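To see why that first point matters so much, here is a back-of-the-envelope sketch. The latency figures below are hypothetical, not Avere's numbers – the point is simply how steeply average latency drops as the miss rate falls from 1-in-50 to 1-in-200:

```python
# Illustrative sketch (not Avere's implementation): the effect of
# cache hit ratio on average I/O latency. Latency values are
# hypothetical assumptions for a fast edge cache vs. a busy backend filer.

def effective_latency_us(hit_ratio: float, cache_us: float, backend_us: float) -> float:
    """Average latency when hit_ratio of I/Os are served from the cache
    and the rest go to the backend filer."""
    return hit_ratio * cache_us + (1.0 - hit_ratio) * backend_us

# Assumed: 100 us from the edge cache, 5,000 us from a loaded backend filer.
miss_1_in_50 = effective_latency_us(49 / 50, 100.0, 5000.0)
miss_1_in_200 = effective_latency_us(199 / 200, 100.0, 5000.0)

print(f"1-in-50 misses:  ~{miss_1_in_50:.1f} us average")
print(f"1-in-200 misses: ~{miss_1_in_200:.1f} us average")
```

Even with a backend 50x slower than the cache, a 1-in-50 miss rate keeps average latency within a few multiples of the cache itself – which is why front-ending slow filers with a big read/write cache can work where pure pass-through virtualization did not.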

The StorageMojo take
Rip and replace has never been popular. With today’s data volumes it is ever more unwieldy.

Avere’s performance and cost-effectiveness make it more than a simple pooling of NAS capacity: by reducing the load on current filers it extends their economic life while eliminating hot-spots and bottlenecks. You keep what you’ve got and make it faster and easier to manage.

Since most disk-based systems are way over-configured on capacity, this also means reduced CapEx and OpEx as fewer new filers are bought and less floor space, power and maintenance are needed. Given their scale-out architecture – minimum config is 3 nodes – you can add performance without adding more filers.

Bottom line: Avere, using 21st century technology, has built a new way to utilize existing resources while improving performance and reducing costs. That’s something no other NAS front-end ever managed.

They’ll do well.

Courteous comments welcome, of course. Any Avere users want to comment on their experience? I haven’t done any work for Avere, but that could change.