Infinite io’s Network Storage Controller (NSC) is a rarity in enterprise storage: a genuinely original device. It turns your file storage network into a software-defined resource. But it’s not a file server, a caching controller or an intelligent front end to existing storage resources.
The NSC differs by applying deep packet inspection and analytics to network storage. First, the NSC scans all back-end file systems and storage to understand the network layout and storage options. Then the NSC’s ultra-low-latency layer 7 proxy uses wire-speed deep packet inspection to see all storage network traffic.
This approach to file storage gives the NSC some unique capabilities:
- The NSC sits in the network, between servers and storage, without changing the network layout.
- It works at wire speed, so it doesn’t affect throughput.
- Unlike traditional storage virtualization, it is transparent to the network – if it fails it automatically passes through all packets to the local file storage, with applications and file servers unaffected (see the sketch after this list).
- The NSC manages storage traffic with much more granularity than array controllers, while also supporting multiple back-end file servers, including cloud storage.
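infinite io hasn’t published how the pass-through is implemented, but the fail-open idea behind that third bullet is easy to picture. Here’s a rough Python sketch of my own (the filer address is made up): an in-path relay copies bytes between client and filer and keeps forwarding even if its inspection hook fails.

```python
import socket
import threading

FILER = ("filer.example.internal", 2049)  # hypothetical back-end filer endpoint

def inspect(data: bytes) -> None:
    """Stand-in for the analytics/DPI engine; deliberately best-effort."""
    pass

def relay(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way; inspection failures never block forwarding."""
    try:
        while True:
            data = src.recv(65536)
            if not data:
                break
            try:
                inspect(data)       # analytics is advisory...
            except Exception:
                pass                # ...so any failure here fails open
            dst.sendall(data)       # packets always reach the filer/client
    except OSError:
        pass                        # peer or partner thread closed the socket
    finally:
        src.close()
        dst.close()

def serve(listen_port: int = 12049) -> None:
    """Accept client connections and splice each one to the filer."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", listen_port))
    srv.listen()
    while True:
        client, _ = srv.accept()
        filer = socket.create_connection(FILER)
        threading.Thread(target=relay, args=(client, filer), daemon=True).start()
        threading.Thread(target=relay, args=(filer, client), daemon=True).start()
```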
Cloud integration
The seamless integration of cloud resources is the NSC’s strength. Cloud storage is considerably less costly than local storage arrays, but concerns about security, availability and performance give many IT pros pause. How does the NSC manage these issues?
Security
Files are broken up into sniblets or chunks before being compressed, encrypted and moved to cloud object storage. Each sniblet has its own key, so even if an attacker gathered all the sniblets of a file, they’d have to decrypt each one and then piece the file together. You can place the sniblets on multiple cloud providers to make attackers work even harder.
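infinite io hasn’t documented the on-the-wire format, so the sniblet size and libraries below are my own choices, but the per-sniblet scheme is easy to sketch in Python (this uses the pypi cryptography package):

```python
import zlib
from cryptography.fernet import Fernet  # pip install cryptography

SNIBLET_SIZE = 4 * 1024 * 1024  # illustrative only; not infinite io's actual chunk size

def split_into_sniblets(data: bytes):
    """Chop a file into sniblets, compress each one, then encrypt each
    with its own key. Returns a list of (key, ciphertext) pairs."""
    sniblets = []
    for offset in range(0, len(data), SNIBLET_SIZE):
        key = Fernet.generate_key()                       # one key per sniblet
        chunk = zlib.compress(data[offset:offset + SNIBLET_SIZE])
        sniblets.append((key, Fernet(key).encrypt(chunk)))
    return sniblets

def reassemble(sniblets) -> bytes:
    """Decrypt and decompress each sniblet, then concatenate in order."""
    return b"".join(zlib.decompress(Fernet(k).decrypt(c)) for k, c in sniblets)
```

Scatter those (key, ciphertext) pairs across providers and an attacker needs every key, every object and the ordering just to get one file back.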
Availability
In normal operation the NSC handles all metadata operations from its internal flash storage, including the metadata for cloud-resident data, so everything moved to the cloud still appears local to applications. When a cloud-stored file is requested, its metadata is served locally while the NSC begins streaming the data back from the cloud.
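In other words, the metadata path never touches the cloud. A toy version of the idea (mine, not infinite io’s code):

```python
class CloudBackedFile:
    """Metadata lives locally (in the NSC's flash, here just a dict);
    contents are fetched from object storage only when first read."""

    def __init__(self, metadata: dict, fetch_from_cloud):
        self.metadata = metadata        # size, mtime, owner, permissions...
        self._fetch = fetch_from_cloud  # callable that streams the cloud object
        self._data = None

    def stat(self) -> dict:
        return self.metadata            # answered instantly, no cloud round trip

    def read(self) -> bytes:
        if self._data is None:
            self._data = self._fetch()  # streaming starts only on first data access
        return self._data
```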
All metadata and state information is stored in the flash as well as, at your option, in external local and/or cloud storage. A server app can access the state data if the NSC fails, enabling quick recovery from hardware failures. Encryption keys are stored locally in a tamper-proof TPM chip for added security and quick recovery, and that chip can be backed up as well.
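The state handling boils down to writing the same snapshot everywhere you might later need it. A minimal sketch, where the extra destinations are whatever writers you configure (a filer mount, an object-store upload, and so on):

```python
import json
import pathlib

def persist_state(state: dict, flash_dir: str, extra_destinations=()):
    """Write one state snapshot to internal flash and, optionally,
    to any number of additional locations for recovery after a failure."""
    blob = json.dumps(state).encode()
    pathlib.Path(flash_dir, "nsc_state.json").write_bytes(blob)  # primary copy in flash
    for write in extra_destinations:   # e.g. a filer-path writer or object-store uploader
        write(blob)
```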
Performance
A single file’s sniblets can be placed across multiple cloud providers, enabling parallel access to the file. File- and sniblet-level ECC enables files to be rebuilt before all sniblets are downloaded – handy in case a service is down or slow.
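infinite io hasn’t published its ECC scheme, so here’s the simplest possible stand-in: one XOR parity sniblet (a real erasure code tolerates more losses). If the provider holding one sniblet is slow or down, the file can be finished from whatever has already arrived:

```python
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length sniblets byte by byte."""
    return bytes(x ^ y for x, y in zip(a, b))

def add_parity(sniblets):
    """Append one parity sniblet: the XOR of all the data sniblets.
    (Assumes the sniblets have been padded to equal length.)"""
    return list(sniblets) + [reduce(xor, sniblets)]

def rebuild_missing(received):
    """Given every sniblet except one data sniblet (parity included),
    XOR them together to recover the one that never arrived."""
    return reduce(xor, received)
```

Fetching sniblets from several providers in parallel and rebuilding the straggler is what turns scattered cloud objects back into a file read with acceptable latency.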
What makes the cloud integration so powerful is that you define what files get moved to the cloud – based on activity, age, size or priority – and the process is entirely transparent to any application that uses file storage.
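The policies themselves are just predicates over file metadata. The thresholds below are mine, not infinite io defaults, but a “cold data” rule might look like:

```python
import os
import time

MAX_IDLE_DAYS = 180        # hypothetical: "cold" if not read in about six months
MIN_SIZE_BYTES = 1 << 20   # hypothetical: skip files under 1 MB

def is_migration_candidate(path: str, now=None) -> bool:
    """Decide whether a file should move to cloud object storage,
    based on last-access age and size."""
    st = os.stat(path)
    now = time.time() if now is None else now
    idle_days = (now - st.st_atime) / 86400
    return idle_days > MAX_IDLE_DAYS and st.st_size >= MIN_SIZE_BYTES
```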
The StorageMojo take
I’m often underwhelmed by those applying network technologies to storage. Networks work with copies; storage with originals. Those are two very different worlds when data needs to be recovered – and in the strategies needed to minimize the need for recovery.
Assuming the NSC works as advertised, it has important advantages over competing front ends such as Avere. For instance, you can deploy it in stages: in display mode it surveys your storage and estimates how much you could save by moving cold data to the cloud.
If that feels good, move to metadata mode, where the NSC accelerates metadata operations using its internal flash, while passing through all updates. Finally, switch on access to public or private cloud storage, choose your file migration policies, and start taking advantage of the cloud’s economies of scale.
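The mode names below are my shorthand, not infinite io’s actual configuration, but the point is that each stage only widens the NSC’s role; nothing from the previous stage is undone:

```python
from enum import Enum

class NSCMode(Enum):
    DISPLAY = 1   # observe traffic and report what moving cold data could save; change nothing
    METADATA = 2  # serve metadata from internal flash; pass every update straight through
    CLOUD = 3     # apply the migration policies and place cold files in object storage
```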
Courteous comments welcome, of course. I’d love to hear from people who’ve tried this device. Please provide enough info – which I can keep confidential – so I can be sure you’re real.
Note: This post is based on a white paper I wrote for infinite io, but the opinions are my own.
Comments

“…if it fails it automatically passes through all packets to the local file storage, with applications and file servers unaffected.”
How does it handle a failure when the data is moved to cloud storage? It sounds very interesting, and the ability to split data among different providers with ECC definitely has appeal. I probably just don’t understand how it works yet, but wouldn’t recovering from a failure in either a single or a split cloud storage scenario be a potential nightmare because of the total dependency on the perfect operation of this device?
As I understand it, the metadata for the cloud sniblets is stored on the TPM chip, which can – and should! – be backed up as well, in case of device failure. The infinite io server app can then recover the cloud data. infinite io is likely working on a cleaner solution.
infinite io backs up state information to both the attached object store and the local filer(s). If a unit fails and a new unit is put into service, state can be recovered from either the object store or the filer(s), allowing the new unit to come up quickly, since it only needs to reconcile the deltas that hit local storage during the downtime. In case of a meteor strike on the datacenter, cloud-migrated data can be accessed with a server app we provide. Our clustering upgrade will add even more fault tolerance.