I’ve liked InfiniBand ever since I learned about it at YottaYotta in 2000. The switches are fast and cheap, the latency very low and the bandwidth – 6 GB/sec full-duplex at 12x – stunning. (Cisco has an excellent technical introduction here.)

One thing it didn’t do, though, was handle distance. Even fiber-based IB was limited to a few hundred meters. A great computer room interconnect, but not so good for the disaster-tolerant configurations that YottaYotta’s cluster-based RAID controller was hoping to address.

YY made do with gigE links, and managed some impressive demonstrations of long-distance terabyte data transfers. Just the thing for a long weekend at the lake.

Of course, there is a downside
InfiniBand was designed to be more of a fixed resource like Fibre Channel than an easy-come, easy-go network like Ethernet. Five years ago the management tools were less than optimal. Some 3rd-party tools were available from Voltaire – hey, guess who’s going public! – but most folks ended up writing their own management code. If you want an “always on” network, though, this isn’t a big problem.

Ideally, InfiniBand would at least offer metro area networking for redundancy. I don’t think you can buy it yet, but long-haul I-band may be coming.

Enter Obsidian Research
Meanwhile, up in northern Alberta, one of YY’s former whizzes, David Southwell, formed Obsidian Research, dedicated to taking I-band long-haul. The company says:

Longbow XR allows arbitrarily distant InfiniBand fabrics to communicate at full bandwidth through 10Gbits/s Wide Area Networks. The WAN connection is managed out of band and, except for flight-time-induced latency, is transparent to the InfiniBand hardware, stacks, operating systems and applications.

XR achieves flow control by shaping WAN traffic and managing buffer credits to ensure extremely high efficiency bulk data transfers — including RDMAs — making the system a highly effective transport mechanism for very large data sets between geographically separated InfiniBand equipment.

In switch mode, Longbow XR looks like a 2-port switch to the InfiniBand subnet manager. A point-to-point WAN link presents as a pair of serially connected 2-port InfiniBand switches spanning the conventional InfiniBand fabrics at each site. A single subnet spans the Wide Area Network connection, unifying what were separate subnets at each site.

Longbow XR also provides an InfiniBand router mode — improving global system manageability, scalability and robustness. In this mode, the sites remain separate subnets, with independent subnet managers, easing possible security and performance concerns associated with remote subnet management. 4x SDR InfiniBand provides just 8Gbits/s of data payload bandwidth; two totally independent Gigabit Ethernet links are also encapsulated across the WAN link to make full use of the extra bandwidth.

Longbow XR communicates via IPv6 over Packet Over SONET (POS), ATM, and 10Gb Ethernet, as well as over dark fiber.
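Two of those claims reward a little arithmetic. First, flow control: InfiniBand uses credit-based flow control, and credits only keep a link full if the receiver advertises enough buffer to cover all the data in flight. Here is a back-of-envelope sketch (my numbers, not Obsidian’s) of how fast that requirement grows with distance:

```python
# Bandwidth-delay product: the buffer (credit) a receiver must advertise
# to keep a flow-controlled link full. InfiniBand's credit scheme assumes
# machine-room distances; a WAN bridge like the Longbow XR has to supply
# enough buffering to cover everything in flight on the long link.
LINK_GBPS = 10.0          # 10 Gbit/s WAN link, per the Obsidian description
FIBER_US_PER_KM = 5.0     # rule of thumb: ~5 microseconds per km of fiber

def inflight_bytes(distance_km: float) -> float:
    """Bytes in flight on a full link over one round trip at this distance."""
    rtt_s = 2 * distance_km * FIBER_US_PER_KM * 1e-6
    return LINK_GBPS * 1e9 / 8 * rtt_s

for km in (0.3, 100, 1000, 5000):
    print(f"{km:>7,.1f} km: {inflight_bytes(km) / 1e6:10.3f} MB of buffer needed")
# 0.3 km (a machine room) needs ~4 KB -- easily covered by normal IB credits.
# 5,000 km needs ~62 MB, far beyond what standard HCAs and switches provision.
```

That, presumably, is why the XR shapes WAN traffic and manages buffer credits itself rather than letting end-node credits throttle the link.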
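Second, the “just 8Gbits/s” figure: SDR InfiniBand uses 8b/10b line coding, so a 4x link signaling at 10 Gbit/s carries only 8 Gbit/s of actual data, which is what leaves room for the two encapsulated GigE links on a 10 Gbit/s WAN pipe. A quick check, ignoring encapsulation overhead:

```python
# Where the quoted "just 8Gbits/s of data payload" comes from, and why two
# GigE links fit in the leftover WAN capacity (encapsulation overhead ignored).
lanes, sdr_gbps = 4, 2.5
ib_signal_gbps = lanes * sdr_gbps            # 4x SDR signals at 10 Gbit/s
ib_payload_gbps = ib_signal_gbps * 8 / 10    # 8b/10b: 8 data bits per 10 signal bits
wan_gbps = 10.0
spare_gbps = wan_gbps - ib_payload_gbps      # ~2 Gbit/s of WAN capacity left over

print(f"IB payload: {ib_payload_gbps:.0f} Gbit/s; spare: {spare_gbps:.0f} Gbit/s")
print("Spare capacity carries two encapsulated 1 Gbit/s Ethernet links")
```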

Southwell is one of the smartest hardware engineers I’ve ever worked with. If he says he can do this, I’m willing to believe he can, given enough time – and if he’ll stop “improving” it and just ship.

The StorageMojo take
I-band has knocked about the industry for some time, a solution looking for that special problem that would provide volume and profits. With the growth of clusters – compute and storage – I believe it has found its niche. Long-haul I-band doesn’t solve distance latency problems, but it sure can move boatloads of data. As Google and others reach for 100x scaling, long-haul I-band could be a helpful tool.
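To put rough numbers on that distinction (mine, for illustration): fiber delay runs about 5 microseconds per kilometer, a floor no protocol can buy back, while the time to move a big data set depends only on bandwidth:

```python
# Distance sets your latency floor but not your throughput: once the pipe
# is full, bulk data moves at wire speed no matter how far it travels.
FIBER_US_PER_KM = 5.0     # ~5 microseconds per km of fiber, one way
LINK_GBPS = 10.0          # 10 Gbit/s long-haul link

distance_km = 3000        # hypothetical cross-continent replication link
rtt_ms = 2 * distance_km * FIBER_US_PER_KM / 1000
dataset_tb = 100          # hypothetical bulk data set
transfer_h = dataset_tb * 8e12 / (LINK_GBPS * 1e9) / 3600

print(f"RTT over {distance_km} km: {rtt_ms:.0f} ms (physics; no protocol fixes it)")
print(f"{dataset_tb} TB at {LINK_GBPS:.0f} Gbit/s: {transfer_h:.0f} hours, at any distance")
# ~30 ms RTT; ~22 hours. Interactive apps feel the distance; bulk moves don't.
```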

After seeing that someone linked to this year-old post, I took a look and discovered some needed edits and broken links, which I’ve fixed.

Comments welcome, of course. What is the state of InfiniBand today?