This morning Woven Systems announced their new 10 Gbit Ethernet switch. I named Woven “coolest hardware” at last year’s Datacenter Ventures conference. Harry Quackenboss, their CEO, promised they’d have the switch working in six months. Well, here it is a mere seven months later, and they’ve done it. My hat’s off to the engineering team.
Now let’s get into Woven’s Mojo.
I’d rather switch than fight
The switch is unique in several respects:
- 10 Gigabit ethernet only
- Up to 144 non-blocking ports on a single switch
- Up to 4,000 non-blocking ports in a fabric of Woven switches
- Built from commodity parts – with one vital exception
- The killer feature: active congestion management
- Uses standard ethernet protocols
What is it going to kill?
It shouldn’t be a surprise that fibre channel has some features that storage systems find really useful. After all, FC was developed as a storage interconnect. So it has bandwidth, flow control, low latency and rapid failover.
Gigabit ethernet falls short in all these areas: limited bandwidth; lost packets in congested networks; high IP latency; and failover that is too slow for storage drivers to manage.
It looks like Woven has solved 3 of the 4.
Woven’s secret sauce is built into an ASIC that sits in front of the commodity 24 port ethernet chip (picture helpfully provided by Woven).
The vScale Packet Processor – I don’t know what the “v” stands for – inserts low-overhead probe packets into the data stream. The vPP at the other end of the stream, whether in the same switch or one across a fabric, bounces them back, so the originating vPP has a real-time view of path latency – updated in milliseconds or less. It works across a fabric of up to 4,000 ports, ensuring QoS even as the fabric grows.
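To make the probe/echo idea concrete, here’s a minimal sketch of how round-trip probes yield a live latency estimate. All the names are illustrative – this is my reconstruction of the concept, not Woven’s actual vPP logic.

```python
import time

def probe_path(echo):
    """Timestamp a probe, let the far end bounce it back, return the RTT.

    `echo` stands in for the far-end vPP; here it simply returns the
    probe, the way the real vPP bounces probe packets straight back.
    """
    sent_at = time.monotonic()
    echo("probe")                        # far end reflects the probe
    rtt = time.monotonic() - sent_at     # round-trip time in seconds
    return rtt

# A trivial "far end" that echoes immediately; a real fabric would add
# queuing delay, which is exactly what the probes are meant to measure.
rtt = probe_path(echo=lambda p: p)
one_way_estimate = rtt / 2               # rough one-way path latency
```

Because the probes ride the same data path as the traffic they measure, the estimate tracks congestion in near real time rather than relying on stale counters.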
That’s pretty cool, but the coolest thing is this:
When path latency climbs too high, the vPP has two tools to manage it:
- It can change to a less congested data path in less than 10ms
- It can pause the HBA using a standard ethernet protocol
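The decision logic implied by those two tools can be sketched in a few lines. This is a hypothetical illustration – the threshold, the dict of path latencies, and the `pause_sender` callback are my assumptions, not Woven’s implementation:

```python
LATENCY_BUDGET_US = 50  # hypothetical per-path latency budget, microseconds

def manage_congestion(path_latencies_us, current_path, pause_sender):
    """Choose a response when the current path exceeds its latency budget.

    path_latencies_us: dict mapping path id -> measured latency (us)
    current_path:      path id currently carrying the flow
    pause_sender:      callback standing in for an ethernet pause message
    """
    if path_latencies_us[current_path] <= LATENCY_BUDGET_US:
        return current_path                      # within budget, stay put
    # Tool 1: fail over to the least congested path (sub-10ms in hardware).
    best = min(path_latencies_us, key=path_latencies_us.get)
    if path_latencies_us[best] <= LATENCY_BUDGET_US:
        return best
    # Tool 2: every path is congested, so pause the sending HBA instead.
    pause_sender()
    return current_path

pauses = []
chosen = manage_congestion({0: 120, 1: 30}, current_path=0,
                           pause_sender=lambda: pauses.append(True))
# chosen == 1: traffic fails over to the less congested path, no pause sent
```

The key point is the ordering: rerouting is preferred because it is invisible to the endpoints; pausing the sender is the backstop when the whole fabric is saturated.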
I know what you are thinking:
Wow, path failover in 10ms – drivers won’t even notice.
Pausing HBAs when congestion strikes is flow control for ethernet – a function FC handles with buffer credits.
All done using standard ethernet protocols, albeit creatively.
That bell you hear is tolling for Fibre Channel, which is about to meet its toughest competitor yet. Which may be why the FC over ethernet proposals are gathering steam in the T11 committee. Adding FC’s low latency protocol to a very fast and reliable 10 Gb switch adds real value and helps protect existing FC investment. Could be a nice win for all involved.
The StorageMojo take
I’m sure all the usual Internet Data Center suspects are lined up to beta Woven’s switch. Linking several hundred thousand servers via ethernet requires a lot of bandwidth, and 10GigE delivers. For the massive storage clusters it is an even bigger win: lost packets are still a pain even if the cluster can survive them.
If everything works as advertised, FC’s decline may be faster than forecast, at least among the large enterprise base that can use a switch of this size. Woven’s switch will be a shot in the arm for big clusters and the people who build them.
Update: I’d inadvertently left out the fact that you can cross-couple the switches to create a 4,000 port fabric so I’ve added it.
Update II: Harry, Woven’s CEO, helpfully added some budget pricing for all you folks with new fiscal years starting mid-year – like the Cisco tear-down guys – and I couldn’t just leave it buried in the comments.
Pricing will be finalized when general availability is announced (planned for Q3 2007), but a 144 10GE port configuration will be about $1500/10GE port, with fully-redundant fans, power supplies, and management cards.
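For the budget-minded, the back-of-the-envelope math on a full configuration is simple:

```python
ports = 144
price_per_port = 1500       # approximate, per Woven's budget pricing
total = ports * price_per_port
print(total)                # 216000 - roughly $216k for a loaded switch
```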
Comments welcome, of course. I spent six hours at NAB today and drove over 1,000 km, so moderation may be a bit sluggish today. Me too.