Seattle-based F5 Networks announced today its plan to acquire Massachusetts-based Acopia Networks. I think it is an interesting tie-up.
Is storage just another network service?
A trick question. Of course storage is, and of course, it isn’t.
Networks specialize in the transient. Storage specializes in the persistent. To the extent that networks enable storage, storage is a network service. To the extent that storage persists, it is its own unique domain.
And please, don’t get started on the “network as giant delay line” argument.
Acopia sells a file server that front-ends other NAS boxes
By virtualizing across multiple file servers, they are able to automate processes, such as tiering, that enable higher utilization of capacity. Which fits with F5’s focus on optimizing Ethernet networks for application delivery. How many applications rely on storage?
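As a rough illustration of the kind of policy a file virtualization layer can automate, here is a sketch of a nightly pass that demotes cold files to a cheaper filer. The mount points, the 90-day threshold, and the logic are all made up for illustration; this is not Acopia’s implementation.

    import os
    import shutil
    import time

    # A made-up tiering pass. The mount points, the 90-day threshold, and the
    # policy are illustrative only, not Acopia's actual logic.
    FAST_TIER = "/mnt/nas_tier1"    # expensive, fast filer
    CHEAP_TIER = "/mnt/nas_tier2"   # cheaper, slower filer
    STALE_AFTER = 90 * 24 * 3600    # demote files untouched for 90 days

    def demote_stale_files():
        """Walk the fast tier and move files nobody has touched lately."""
        now = time.time()
        for dirpath, _, filenames in os.walk(FAST_TIER):
            for name in filenames:
                src = os.path.join(dirpath, name)
                if now - os.path.getatime(src) > STALE_AFTER:
                    dst = src.replace(FAST_TIER, CHEAP_TIER, 1)
                    os.makedirs(os.path.dirname(dst), exist_ok=True)
                    shutil.move(src, dst)

    if __name__ == "__main__":
        demote_stale_files()

The move itself is the easy part. The value of the virtualization layer is that it keeps a single namespace in front of the filers, so clients and applications never notice that the file changed boxes.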
The financials
Acopia got over $80 million in funding, and F5 is paying $210 million in cash. It isn’t the 10-bagger of VC dreams, but 2.6x means everyone gets to play another day.
The StorageMojo take
A bunch of folks tried to build file virtualization boxes and most of them failed. F5 will get a nice boost by starting to tap the 45% of data center hardware spend that is storage.
The problem is that virtualizing NAS servers gets a lot less interesting in an NFS 4.1 world. With pNFS, clients get a file’s layout from a metadata server and then read and write the data servers directly, in parallel, so there is much less for an in-band virtualization box to do. Clustered NAS with scalable parallel data access eliminates many of the problems that Acopia set out to solve.
Widespread adoption of 4.1 is a couple of years off. In the meantime I expect F5 to do quite well with their new acquisition.
Comments welcome, as always.
Of course, the real question is: Does F5 make Acopia a stronger competitor to EMC’s Rainfinity, or not?
F5 who?
With Cisco gobbling up Neopath, this is a smart move for F5. These file virtualization appliances are essentially IP storage load balancers/traffic managers, and F5 always made a very good load balancer.
It’ll be interesting to see how long it takes F5 to integrate it with its main appliances.
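To make the load-balancer analogy concrete, here is a toy sketch of the idea, not how F5 or Acopia actually do it: a front-end appliance picks the back-end filer with the fewest requests in flight and proxies the call to it. The back-end names and the forward() stub are invented for illustration.

    # Toy least-connections picker across NAS back ends.
    # The back-end names and the forward() stub are made up for illustration.
    active = {"nas-a": 0, "nas-b": 0, "nas-c": 0}

    def pick_backend() -> str:
        """Choose the filer with the fewest requests in flight."""
        return min(active, key=active.get)

    def forward(op, backend) -> None:
        print(f"sending {op} to {backend}")  # stand-in for proxying the NFS/CIFS call

    def handle_request(op) -> None:
        backend = pick_backend()
        active[backend] += 1
        try:
            forward(op, backend)
        finally:
            active[backend] -= 1

    if __name__ == "__main__":
        for op in ("READ /home/a", "WRITE /home/b", "READ /home/c"):
            handle_request(op)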
Anarchist – I assume that is ironic –
I don’t have an opinion about Rainfinity vs Acopia. I haven’t looked at either of them enough. Hooking up with a $3.6 B market cap and a 40% growth rate can’t hurt.
Max, I am unimpressed by Cisco’s OEM-only strategy for FC switches. I don’t know what they have planned for Neopath, but at some point you have to engage with customers directly if you want to own the business. Networking guys generally don’t get storage – look how Intransa has struggled – not to mention Cisco’s weakness in storage.
Storage will never be part of Cisco’s or F5’s business. *Access* to storage is their business, but only if they go direct.
The low-intensity warfare between storage and servers in the datacenter shows no sign of ending. A crisp definition of the lines between the two is the best hope for a networking company to horn in on the massive storage revenue stream and margins.
If F5 markets it right, they could do very well. See the “networking guys” comment above.
I’m not bashing networking folks. The two markets are just different in ways that the networking mindset has difficulty discerning. The reverse is also true. Access, not persistence, is the key.
Robin
This is probably the best future Acopia could have had. They were struggling to find a real home all alone. With F5 leading with load balancers, it fits nicely into that market. Good stuff!
RE: Load balancers in general
Nice to have if you can get them. Nice to know when you need them.
Every time I get into a “load balancer” discussion I’m reminded of the great, but short-lived, debate about whether a time-division or statistical mux is the best way to go. Time division just came first. “Stat” muxes were always superior to time division.
So the question is, “Why are we screwing around with load balancers when we need something better?” Load balancers came first? What comes next? When can I get it?
I’ve had to build my own for years. I never wanted to be in the “load balancer” software development business, and certainly not in the software maintenance business.
Where should load balancers operate? Do they communicate with each other? Is there an Open Standard?
Can I use Thin Provisioning as a load balancer?
I want this as a standard service handled by the (Storage? or App? or network? Other?) SLA.
Why don’t disk drives talk to each other? We don’t want them that smart?
Why don’t disk drives have SSD and Flash in them for quick rebuilds?
Maybe all the engineers that could do this were on a bridge “too far” somewhere?