Xsigo (see-go) produces an I/O consolidation appliance whose elegance impresses.
I/O clutter
Typical blade servers have several I/O adapters for networks and storage. Today’s multi-CPU, multi-core motherboards need a lot of bandwidth to stay busy, so configurations with 2-4 GigE or 10GigE network ports and 2 or more SAS or FC HBAs are common.
Each adapter eats slots and power, adds cost, and makes I/O a pain to upgrade or replace. Xsigo offers an alternative.
Big cheap pipe
Built on 20 Gb/s DDR InfiniBand, Xsigo replaces physical NICs and HBAs with virtual ones configured on the fly. Xsigo says the InfiniBand layer is not visible in daily operation.
The physical I/O is implemented in Xsigo’s I/O Director, a 15-slot box with 24 non-blocking DDR InfiniBand ports for server connections. The slots support your choice of single-port 10GigE, dual-port 4 Gbit FC, or 10-port GigE I/O modules.
Each 10GigE module supports up to 128 vNICs. The FC module supports 128 vHBAs. And the GigE module can support 160 vNICs.
Xsigo says you can do almost anything with the v-adapters that you can do with the real thing: jumbo frames, LUN masking, link aggregation, VLANs, SAN boot, and QoS features like committed information rates.
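Since a vNIC presents itself to the OS as an ordinary network interface, you can sanity-check things like jumbo-frame MTUs from the host exactly as you would with a physical NIC. Here is a minimal Linux-side sketch, using only the standard sysfs tree and nothing Xsigo-specific; the interface names you see will depend on your own drivers:

```python
#!/usr/bin/env python3
"""List network interfaces and flag jumbo-frame MTUs.

A vNIC provisioned by an I/O Director should appear here just like a
physical NIC would; this only reads the standard Linux /sys/class/net tree.
"""
import os

SYSFS_NET = "/sys/class/net"

def read_attr(iface, attr):
    # Each interface exposes attributes as small text files in sysfs.
    try:
        with open(os.path.join(SYSFS_NET, iface, attr)) as f:
            return f.read().strip()
    except OSError:
        return "unknown"

for iface in sorted(os.listdir(SYSFS_NET)):
    mtu = read_attr(iface, "mtu")
    state = read_attr(iface, "operstate")
    frames = "jumbo" if mtu.isdigit() and int(mtu) > 1500 else "standard"
    print(f"{iface:12s} mtu={mtu:>5s} ({frames})  state={state}")
```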
Here’s the cool part: the v-adapter addresses can dynamically migrate with a specific VM. That’s a big improvement over the default VM-only migration.
The StorageMojo take
Good to see InfiniBand used as a big cheap pipe. Its low latency, cheap switch ports and high bandwidth make it the best choice for this application.
VMware and Hyper-V have serious I/O problems. Xsigo helps manage them.
Courteous comments welcome, of course. Xsigo was one of 10 or so sponsors that brought me and 15 other bloggers to Silicon Valley last week. They probably have some competition, but I couldn’t find them by Googling. Let me know who they are.
What about Xen? Is the Xsigo layer completely transparent to the hypervisor? I don’t like vendor lock-in and you only mention Hyper-V and VMware 🙂
So the two questions/issues I had were:
Are they using standard IB adapters in the hosts, or are they special Xsigo adapters?
Do they require a host based driver to function? If so, what operating systems are supported today?
I *think* Xsigo relies on Mellanox for InfiniBand adapters and driver support, and the Mellanox site says “RHEL, SLES, Windows, HPUX, ESX3.5” support.
Blake, I mentioned Hyper-V, but it isn’t clear to me that Xsigo supports it today. But at the rate Hyper-V is gaining share it is a good bet that they – or Mellanox – will do so in good time.
Robin
Here is the Xsigo compatibility matrix – they seem to support a range of IB adapters (Mellanox, Voltaire, IBM, Cisco, HP) and blade chassis IB switch modules. I assume they must have custom drivers for all of these:
http://www.xsigo.com/_downloads/Spec_sheet/interoperability.pdf
Operating system support seems to be limited to Novell Linux, RHEL (Xen is mentioned too), VMware, and Windows 2003 and 2008.
Check out Virtensys, who do similar I/O virtualisation to Xsigo except using PCI Express:
http://www.virtensys.com/
You asked what competitors this product has. Isn’t HP’s Virtual Connect and Flex-10 technology a somewhat direct competitor? It’s not exactly the same because it is tied to HP blade servers, of course, but it performs similar functions for those servers.
-J
PS disclosure: I work at HP, but in the StorageWorks division; I am not connected to the blade or Virtual Connect teams.
PS2 Thanks for a great website–I read it often and always learn something new.
Like HP (if JD is right above), Cisco has a product coming for their UCS platform that does basically the same thing. Riding DCE (CEE, whatever), you can carve out vHBAs and vNICs which follow the VM around. It’s VMware only (and will be, for the foreseeable future), but it is something.
–Jason (It’s also Cisco exclusive, so there’s that too)
Great questions here! Let me expand on the answers already offered.
First on Xen. Yes, Xsigo is transparent to the hypervisor. It appears as any physical NIC or HBA would. The only question is driver availability. On that front, yes, Red Hat Xen is supported, as well as VMware ESX 3.5 & 4.0, Linux, Windows, Hyper-V, and Novell. Support is continuously being expanded as well.
The host adapters are Mellanox, and are available from Xsigo and also from virtually every server maker. Most vendors also offer mezzanine cards for blades and switch modules as well. So Xsigo works across most every x86 server and blade system. Both 20G and 40G adapters can be used, and the cost on these is surprisingly low given the performance.
On our blog, we’ve started a series on virtual I/O technology basics. A good place to start if you’re interested in learning more about this exciting new space! http://www.xsigo.com/blog/
– Jon
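For readers who want to check what a host actually ended up with, here is a small sketch (not an official Xsigo or Mellanox tool) that reads the standard Linux InfiniBand sysfs tree and reports each HCA’s model and negotiated port rate, so a 20 Gb/s DDR link is easy to tell apart from a 40 Gb/s QDR one:

```python
#!/usr/bin/env python3
"""Report InfiniBand HCAs and their port rates from sysfs.

Assumes a Linux host with the in-kernel IB stack loaded; the
/sys/class/infiniband layout is standard, nothing Xsigo-specific.
"""
import glob
import os

def read(path):
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return "unknown"

for hca in sorted(glob.glob("/sys/class/infiniband/*")):
    name = os.path.basename(hca)
    model = read(os.path.join(hca, "hca_type"))      # e.g. a Mellanox part number
    for port in sorted(glob.glob(os.path.join(hca, "ports", "*"))):
        rate = read(os.path.join(port, "rate"))      # e.g. "20 Gb/sec (4X DDR)"
        state = read(os.path.join(port, "state"))    # e.g. "4: ACTIVE"
        print(f"{name} port {os.path.basename(port)}: {model}, {rate}, {state}")
```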
Check out 3Leaf Systems for approximate competition:
http://www.3leafsystems.com/
Xsigo does have an interesting product, but describing it as a big cheap pipe is just not accurate. I’ve seen quotes for minimally configured systems running into six figures. There is definitely a cost barrier with this technology. One of the cool things about Xsigo is the ability to consolidate Fibre Channel and Ethernet networks into one IB connection to the host. However, in environments moving to FCoE with native FCoE targets, this benefit is lost, or it can be provided by Cisco Nexus switches in mixed Ethernet/FC environments for considerably less money.
John, I haven’t looked at Xsigo’s pricing, but they said they work on a cost displacement basis. My comment was based on InfiniBand prices, which are competitive for the bandwidth.
Robin
Dan,
3Leaf withdrew their I/O virtualization product from the market well over a year ago. I had ordered their product for eval in our labs, but they canceled my order.
John, if one wants a big, cheap pipe, one can just plop in InfiniBand adapters and go to town with IPoIB, which is currently supported by VMware. That only leaves one with the need for an IB switch and an IB-to-IP gateway device. The latter can be rolled by hand with any Linux box (or pair of Linux boxen, using something simple like Red Hat Cluster Suite or your favorite clustering tool, if you need HA).
Xsigo’s solution sounds like an exact “respin” of Topspin.
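On the roll-your-own gateway idea above: here is a rough sketch of the steps on a single Linux gateway box, assuming the in-kernel ib_ipoib driver, iproute2, sysctl, and iptables are present. The interface names and addresses are placeholders, there is no HA, and none of it is Xsigo-specific:

```python
#!/usr/bin/env python3
"""Rough sketch of a DIY IPoIB-to-Ethernet gateway on one Linux box.

Assumptions: ib0 is the IPoIB interface, eth0 faces the Ethernet LAN,
and the addresses below are placeholders for your own subnets.
"""
import subprocess

def run(cmd):
    # Echo each command before running it, and fail fast on errors.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Load the IPoIB driver so the HCA shows up as a normal IP interface.
run(["modprobe", "ib_ipoib"])

# Address the InfiniBand side and bring it up.
run(["ip", "addr", "add", "192.168.100.1/24", "dev", "ib0"])
run(["ip", "link", "set", "ib0", "up"])

# Let the box route between the IB fabric and the Ethernet LAN.
run(["sysctl", "-w", "net.ipv4.ip_forward=1"])

# NAT traffic from the IB subnet out the Ethernet side (plain routing
# works too, if the LAN has a return route to the IB subnet).
run(["iptables", "-t", "nat", "-A", "POSTROUTING",
     "-s", "192.168.100.0/24", "-o", "eth0", "-j", "MASQUERADE"])
```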