Liqid’s composable infrastructure

by Robin Harris on Monday, 8 May, 2017

The technology wheel is turning again. Yesterday it was converged and hyperconverged infrastructure. Tomorrow it’s composable infrastructure.

Check out Liqid, a software-and-some-hardware company that I met at NAB. The software – Element – enables you to configure custom servers from hardware pools of compute, network, and, of course, storage.

I met Liqid co-founder Sumit Puri at NAB 2017, and had a concall with him and Jay Breakstone, co-founder and CEO, last week. From long experience I’m always skeptical of claims that rely on networks running high bandwidth applications, but Liqid has taken a smart approach.

What is composable infrastructure?
This is a concept that Intel has been pushing with its Rack Scale Design, and that HPE has productized with Synergy. The idea is to build high-density racks of compute, network, and storage, and use software to create virtual servers that have whatever the application needs for optimum performance.
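In rough pseudocode terms, the idea looks like this. A toy Python sketch (all names here are illustrative, not Liqid's or Intel's actual API): resources live in shared pools, and a "server" is just a reservation against them that can be handed back.

```python
# Illustrative sketch of composable infrastructure (invented names, not a
# real API): a composed server is a reservation against shared pools.

class ResourcePool:
    def __init__(self, cpus, nics, ssds):
        self.free = {"cpu": cpus, "nic": nics, "ssd": ssds}

    def allocate(self, **wanted):
        # Reserve resources for one composed server, or fail without
        # partially draining the pool.
        if any(self.free[k] < n for k, n in wanted.items()):
            raise RuntimeError("pool exhausted")
        for k, n in wanted.items():
            self.free[k] -= n
        return dict(wanted)

    def release(self, grant):
        # Decompose the server: everything returns to the pool.
        for k, n in grant.items():
            self.free[k] += n

pool = ResourcePool(cpus=64, nics=16, ssds=48)
server = pool.allocate(cpu=8, nic=2, ssd=4)   # compose a virtual server
pool.release(server)                          # tear it down; nothing is stranded
```

The point of the model: no resource is welded to a particular chassis, so right-sizing a server is a bookkeeping operation, not a hardware change.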

Like physical servers, these virtual servers can run VMware or Docker. The difference is that if you need a lot of network bandwidth, you can get it without opening a box.

Or if a virtual server dies, move its boot device to a new one. That’s flexible.

The payoff
Current server utilization ranges from 15-35%. Double that and the payback is almost instant.
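The back-of-envelope arithmetic behind that claim (the utilization figures are the post's; the model is a simplification): for a fixed workload, doubling average utilization roughly halves the number of servers you need.

```python
import math

# Servers needed to carry a fixed workload at a given average utilization.
# "workload" and "capacity_per_server" are in the same arbitrary units.
def servers_needed(workload, capacity_per_server, utilization):
    return math.ceil(workload / (capacity_per_server * utilization))

print(servers_needed(1000, 10, 0.25))  # 400 servers at 25% utilization
print(servers_needed(1000, 10, 0.50))  # 200 servers at 50%: half the fleet
```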

Eventually, with an API, there’s no reason an application couldn’t request additional resources as needed. Real-time server reconfiguration on the fly.
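Such a request might look something like this. The endpoint and field names below are invented for illustration; Liqid has not published this interface, so treat it as a sketch of the idea, not their API.

```python
import json

# Hypothetical payload an application might send to a composability API
# to ask the fabric for more resources (all names are invented).
def scale_up_request(server_id, resource, count):
    return json.dumps({
        "server": server_id,
        "add": {resource: count},
        "reason": "latency SLO breached",
    })

payload = scale_up_request("vs-42", "nic", 2)
# An orchestrator would POST this to something like
# /api/v1/servers/vs-42/resources; the reconfiguration happens in the
# fabric, with no physical hot-plug.
```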

PCIe switch
The hardware part of Liqid makes what they do possible, even though they are not wedded to being a hardware company. They built a top-of-rack, non-blocking PCIe switch, using a switch chip from PLX, now owned by Avago. (StorageMojo mentioned PLX in a piece on DSSD 3 years ago.)

The switch contains a Xeon processor that runs Liqid’s software. That’s right, there are no drivers to install on the servers. The switches have 24 PCIe ports in a half-rack box, so you can have a dual-redundant 48-port switch in 1U.

Performance
In the IOPS abundant world of flash storage, latency is now the key performance metric. And Liqid says their switch latency is 150ns. Take any local PCIe I/O, run it through the switch, and add only 150ns of latency.

Then there’s bandwidth. This is a Gen3 PCIe switch with up to 96GB/sec of bandwidth. Liqid has several reference designs that offer scale out and scale up options.

The StorageMojo take
The oddest part of Liqid’s business is the switch. Why haven’t Cisco and Brocade built PCIe switches? There’s been a collective blindspot in Silicon Valley around PCIe as a scalable interconnect. (Likewise with Thunderbolt, but that’s another blindspot story.)

But the important thing is that Liqid – and HPE – has caught a wave. PCIe’s ubiquity – everything plugs into PCIe – plus its low latency and high bandwidth make it the do-everything fabric. And yes, you can run PCIe over copper and glass, the latter for 100m+ distances.

Intel has updated their RSD spec to include PCIe fabric as well. If you want to get a jump on the Next Big Thing, check out Liqid and start thinking about how it can make your datacenter more efficient.

Courteous comments welcome, of course.


Mark May 11, 2017 at 6:06 am

To quote Yogi Berra, Liqid seems like déjà vu all over again. Bare-metal server I/O virtualization with provisioning: first with Egenera and PAN Manager, then with InfiniBand-based I/O virtualization from Topspin/VFrame and Xsigo Systems.

The idea of using PCIe switching to virtualize I/O is also not new. PCIe for I/O sharing and virtualization was planned for Sun’s blade servers a decade ago (Sun never delivered a PCIe switch, but did deliver a shared MR-IOV NIC), and I remember seeing PCIe switching based I/O virtualization at VMworld about a decade ago.

There was even another “Liquid” start-up back then, Liquid Computing, with their LiquidIQ software. They originally created an SMP implementation on top of AMD’s HyperTransport, but later developed an I/O virtualization and provisioning solution.

What has changed now is the possibility of native PCIe storage access using NVMe.

I’m not sure similar approaches which failed in the past will work this time. Ultimately customers seek specific outcomes, and if a customer can get a similar outcome with more mainstream tools, they will not need esoteric solutions from a start-up.
