The technology wheel is turning again. Yesterday it was converged and hyperconverged infrastructure. Tomorrow it’s composable infrastructure.

Check out Liqid, a software-and-some-hardware company that I met at NAB. The software – Element – enables you to configure custom servers from hardware pools of compute, network, and, of course, storage.

I met Liqid co-founder Sumit Puri at NAB 2017, and had a concall with him and Jay Breakstone, co-founder and CEO, last week. From long experience I’m always skeptical of claims that rely on networks to run high-bandwidth applications, but Liqid has taken a smart approach.

What is composable infrastructure?
This is a concept that Intel has been pushing with their Rack Scale Design, and that HPE has productized with Synergy. The idea is to build high-density racks of compute, network, and storage, and use software to compose virtual servers with whatever resources the application needs for optimum performance.

Like physical servers, these virtual servers can run VMware or Docker. The difference is that if you need a lot of network bandwidth, you can get it without opening a box.

Or if a virtual server dies, move its boot device to a new one. That’s flexible.
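
To make the pooled-resource idea concrete, here is a minimal sketch of what composing a server from device pools might look like. This is illustrative Python only; the class and function names are mine, not anything from Liqid’s Element software.

```python
# Hypothetical sketch of composing a logical server from disaggregated
# device pools. All names here are invented for illustration; this is
# not Liqid's Element API.
from dataclasses import dataclass, field

@dataclass
class Pool:
    """A free list of devices of one type (NICs, NVMe SSDs, ...)."""
    devices: list

    def allocate(self, count):
        if count > len(self.devices):
            raise RuntimeError("pool exhausted")
        taken, self.devices = self.devices[:count], self.devices[count:]
        return taken

@dataclass
class ComposedServer:
    cpu: str
    nics: list = field(default_factory=list)
    ssds: list = field(default_factory=list)

def compose(cpu_pool, nic_pool, ssd_pool, nics, ssds):
    """Bind devices from the pools into one logical server."""
    return ComposedServer(cpu=cpu_pool.allocate(1)[0],
                          nics=nic_pool.allocate(nics),
                          ssds=ssd_pool.allocate(ssds))

# Compose a server with 2 NICs and 4 NVMe drives from rack-level pools.
cpu_pool = Pool([f"cpu{i}" for i in range(8)])
nic_pool = Pool([f"nic{i}" for i in range(16)])
ssd_pool = Pool([f"nvme{i}" for i in range(32)])
print(compose(cpu_pool, nic_pool, ssd_pool, nics=2, ssds=4))
```

The point of the sketch: the devices never move; only the mapping between them and the logical server changes, which is what lets a dead server’s boot device be handed to a new one.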

The payoff
Current server utilization typically runs in the 15-35% range. Double that (say, from 25% to 50%) and the same workload runs on roughly half the hardware, so the payback is almost instant.

Eventually, with an API, there’s no reason an application couldn’t request additional resources as needed: real-time server reconfiguration on the fly.
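
If and when that API arrives, the call from inside an application might look something like this sketch. The host, endpoint, and payload fields are invented for illustration; I’m not quoting an actual Liqid interface.

```python
# Hypothetical example: an application asking the fabric manager for
# more resources at runtime. The host, endpoint, and JSON fields are
# invented; this is not a documented Liqid API.
import json
import urllib.request

def request_resource(resource_type, count):
    payload = json.dumps({"type": resource_type, "count": count}).encode()
    req = urllib.request.Request(
        "http://fabric-manager.example/api/v1/grow",  # hypothetical endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# e.g. ask for one more NVMe drive when local capacity runs low:
# result = request_resource("nvme", 1)
```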

PCIe switch
The hardware part of Liqid makes what they do possible, even though they are not wedded to being a hardware company. They built a top-of-rack, non-blocking PCIe switch using a switch chip from PLX, now owned by Avago. (StorageMojo mentioned PLX in a piece on DSSD 3 years ago.)

The switch contains a Xeon processor that runs Liqid’s software. That’s right: there are no drivers to install on the servers. Each switch has 24 PCIe ports in a half-width box, so you can have a dual-redundant 48-port switch in 1U.

Performance
In the IOPS-abundant world of flash storage, latency is now the key performance metric. And Liqid says their switch latency is 150ns. Take any local PCIe I/O, run it through the switch, and add only 150ns of latency.
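
To put 150ns in context, here’s a back-of-envelope comparison against an assumed NVMe flash read latency of roughly 80 microseconds; that device figure is my assumption, not a number from Liqid.

```python
# Back-of-envelope: how much does 150 ns of switch latency add to a
# flash read? The ~80 µs device latency is an assumed typical NVMe
# figure, not a Liqid number.
switch_ns = 150
nvme_read_ns = 80_000   # ~80 µs, assumed
print(f"Added latency: {switch_ns / nvme_read_ns:.2%}")  # ~0.19%
```

At that ratio the fabric hop is lost in the noise of the media itself.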

Then there’s bandwidth. This is a Gen3 PCIe switch with up to 96GB/sec of bandwidth. Liqid has several reference designs that offer scale-out and scale-up options.
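
For what it’s worth, here is one way the 96GB/sec figure could pencil out, assuming the 24 ports are each Gen3 x4; the port width and the per-lane math are my assumptions, not Liqid’s published spec.

```python
# Rough check of the aggregate bandwidth claim, assuming 24 ports of
# PCIe Gen3 x4. Per-lane throughput uses 8 GT/s with 128b/130b
# encoding. Port width is assumed, not taken from Liqid's spec sheet.
ports = 24
lanes_per_port = 4                  # assumed x4 ports
GBps_per_lane = 8 * 128 / 130 / 8   # ~0.985 GB/s usable per lane
print(f"~{ports * lanes_per_port * GBps_per_lane:.0f} GB/s")  # ~95 GB/s
```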

The StorageMojo take
The oddest part of Liqid’s business is the switch. Why haven’t Cisco and Brocade built PCIe switches? There’s been a collective blind spot in Silicon Valley around PCIe as a scalable interconnect. (Likewise with Thunderbolt, but that’s another blind spot story.)

But the important thing is that Liqid – and HPE – have caught a wave. PCIe’s ubiquity – everything plugs into PCIe – plus its low latency and high bandwidth make it the do-everything fabric. And yes, you can run PCIe over copper and glass, the latter for 100m+ distances.

Intel has updated their RSD spec to include PCIe fabric as well. If you want to get a jump on the Next Big Thing, check out Liqid and start thinking about how it can make your datacenter more efficient.

Courteous comments welcome, of course.