Liqid’s composable infrastructure

by Robin Harris on Monday, 8 May, 2017

The technology wheel is turning again. Yesterday it was converged and hyperconverged infrastructure. Tomorrow it’s composable infrastructure.

Check out Liqid, a software-and-some-hardware company that I met at NAB. The software – Element – enables you to configure custom servers from hardware pools of compute, network, and, of course, storage.

I met Liqid co-founder Sumit Puri at NAB 2017, and had a conference call with him and Jay Breakstone, co-founder and CEO, last week. From long experience I’m always skeptical of claims that rely on networks to run high-bandwidth applications, but Liqid has taken a smart approach.

What is composable infrastructure?
This is a concept that Intel has been pushing with its Rack Scale Design, and that HPE has productized with Synergy. The idea: build high-density racks of compute, network, and storage resources, and use software to compose virtual servers that have whatever the application needs for optimum performance.

Like physical servers, these virtual servers can run VMware or Docker. The difference is that if you need a lot of network bandwidth, you can get it without opening a box.

Or if a virtual server dies, move its boot device to a new one. That’s flexible.

The payoff
Current server utilization ranges from 15-35%. Double that and the payback is almost instant.
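A rough sanity check on that payback claim – a minimal sketch, where the fleet size and per-server cost are illustrative assumptions, not figures from Liqid:

```python
# Back-of-envelope on doubling utilization. The utilization range is
# from the post; fleet size and server cost are assumed for illustration.

servers = 1000            # assumed fleet size
cost_per_server = 8_000   # assumed fully loaded cost per server, USD
utilization = 0.25        # midpoint of the 15-35% range cited above

work = servers * utilization               # useful work delivered today
servers_needed = work / (utilization * 2)  # same work at doubled utilization

freed = (servers - servers_needed) * cost_per_server
print(f"Servers needed: {servers_needed:.0f}")   # 500
print(f"Capital freed:  ${freed:,.0f}")          # $4,000,000
```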

Eventually, with an API, there’s no reason an application couldn’t request additional resources as needed. Real time server reconfiguration on the fly.
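Something like the sketch below is presumably what that would look like. To be clear, the URL, endpoint, and field names here are all invented for illustration – this is not Liqid’s actual API:

```python
import requests

# Hypothetical composability API call - the URL, endpoint, and payload
# fields are invented for illustration, not Liqid's actual interface.
COMPOSER = "https://composer.example.local/api/v1"

def request_resources(server_id: str, nics: int = 0, nvme_tb: int = 0) -> dict:
    """Ask the fabric manager to hot-add devices to a composed server."""
    payload = {"server": server_id, "add": {"nics": nics, "nvme_tb": nvme_tb}}
    resp = requests.post(f"{COMPOSER}/compose", json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()

# An application hitting an I/O wall could, in principle, grab more
# flash without anyone opening a box:
# request_resources("db-prod-07", nvme_tb=4)
```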

PCIe switch
The hardware part of Liqid makes what they do possible, even though they are not wedded to being a hardware company. They built a top-of-rack, non-blocking PCIe switch, using a switch chip from PLX, now owned by Avago. (StorageMojo mentioned PLX in a piece on DSSD 3 years ago.)

The switch contains a Xeon processor that runs Liqid’s software. That’s right: there are no drivers to install on the servers. Each switch has 24 PCIe ports in a half-rack-width box, so you can have a dual-redundant, 48-port switch in 1U.

Performance
In the IOPS-abundant world of flash storage, latency is now the key performance metric. And Liqid says their switch latency is 150ns: take any local PCIe I/O, run it through the switch, and add only 150ns of latency.
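To put 150ns in perspective, here’s a minimal sketch; the ~100µs NVMe read latency is a typical assumed figure, not a Liqid or vendor number:

```python
# How much does 150ns of switch latency add to a flash I/O?
# The ~100us NVMe read latency is an assumed typical figure.

switch_ns = 150
nvme_read_ns = 100_000   # ~100 microseconds, assumed

print(f"Switch overhead per read: {switch_ns / nvme_read_ns:.2%}")  # 0.15%
```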

Then there’s bandwidth. This is a Gen3 PCIe switch with up to 96GB/sec of aggregate bandwidth – about 4GB/sec per port across the 24 ports, presumably x4 Gen3 links. Liqid has several reference designs that offer scale-out and scale-up options.

The StorageMojo take
The oddest part of Liqid’s business is the switch. Why haven’t Cisco and Brocade built PCIe switches? There’s been a collective blind spot in Silicon Valley around PCIe as a scalable interconnect. (Likewise with Thunderbolt, but that’s another blind spot story.)

But the important thing is that Liqid – and HPE – have caught a wave. PCIe’s ubiquity – everything plugs into PCIe – plus its low latency and high bandwidth make it the do-everything fabric. And yes, you can run PCIe over copper and glass, the latter for 100m+ distances.

Intel has updated their RSD spec to include PCIe fabric as well. If you want to get a jump on the Next Big Thing, check out Liqid and start thinking about how it can make your datacenter more efficient.

Courteous comments welcome, of course.


NAB 2017 storage roundup

by Robin Harris on Thursday, 4 May, 2017

Spent two days at the annual National Association of Broadcasters (NAB) confab in Las Vegas. With 4k video everywhere, storage was a hot topic as well. Here’s what caught my eye.

Object storage – often optimized for large files – continues to be a growth area. Scality, Dynamic Data Pool, Object Matrix, HGST, Data IO, OpenIO, and more were out in force. Typically, object stores offer lower costs than cloud storage, excellent availability, data integrity, and easy scalability.

AWS, Microsoft, IBM and Google were all touting their cloud services for media producers and distributors. The killer app for cloud storage – at least in media and entertainment – is not storage alone, but collaboration and sharing. It’s common today for creative contributors to come from Europe and Asia as well as North and South America.

Feature-length movies can reach a million gigabytes – a petabyte. Secure sharing is a requirement that cloud vendors are well positioned to address.

Thunderbolt storage has gotten a shot in the arm from the adoption of Thunderbolt 3.

Atto was demoing a Thunderbolt controller that achieved over 2700MB/sec throughput to a single storage array.

Accusys was back with their Thunderbolt A12T3 sharable desktop storage. It supports up to 8 nodes for a low-cost shared infrastructure.

Symply hit some potholes on their way to a shared Thunderbolt storage system, but says they’re back on track. One issue: 40Gb/s signal integrity is not easy and, of course, absolutely essential.

Device vendors WD and Seagate were there to tout their system products. Seagate’s acquisitions of array vendor Dot Hill and chassis specialist Xyratex – an IBM spin-off – enable them to offer reliable, high density storage and compute platforms. I’ve already mentioned the WD/HGST object storage platform.

Drones
Drones shooting 4k video are also hot. DJI, the market leader – you can pick up former competitor 3DR’s Solo Drone at fire sale prices – introduced a 100 megapixel drone with a Hasselblad camera for the pro market.

The Hasselblad shoots 4k video in addition to producing 230MB RAW image files. Hollywood crane rentals are about to take a hit.

Render acceleration
The problem of producing special effects quickly – whether for media, VR, or AEC (architecture, engineering & construction) – got a lot of attention. Some vendors were showing realtime rendering of special effects.

Silverdraft, for example, has a personal rendering machine and a half-rack video supercomputer. Content producers with Silverdraft and the right software can play with effects in real time, rather than making a change and waiting for a render to see the difference.

VR
Lots of VR demos. But as I pointed out on ZDNet,

While 4k content requires 4x the storage capacity of 2k (1080) content, it is storage bandwidth that will force costly system upgrades.

Dual 4k displays at 90FPS require about 10GB/sec of storage bandwidth. So even if realtime renders are consumerized, the storage bandwidth will be the long pole driving cost.
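The arithmetic behind that figure – a rough sketch, assuming uncompressed 4K frames at 4 bytes per pixel; the exact bit depth and overheads are assumptions:

```python
# Rough check on the ~10GB/sec claim for dual 4K at 90FPS.
# 4 bytes/pixel (e.g. 10-bit color padded to 32 bits) is an assumption.

width, height = 3840, 2160   # 4K UHD
displays = 2
fps = 90
bytes_per_pixel = 4          # assumed uncompressed depth

bandwidth = width * height * displays * fps * bytes_per_pixel
print(f"{bandwidth / 1e9:.1f} GB/sec raw")  # ~6.0 GB/sec
# Add headroom for seeks, metadata, and deeper color and you land
# in the ~10GB/sec range cited above.
```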

The StorageMojo take
It’s clear that unlike the moribund 3D push, 4k is a real trend with long term implications for the storage industry. Like what?

4k’s large file sizes will energize the object storage market, as long as it has plenty of bandwidth to go along with low cost capacity.

4k production – and 6k and 8k as well – also stresses portable systems and storage for onsite replication, rough cuts, dailies, and technical checks.

Archives will also have to grow to handle 4k.

Bottom line: 4k will be a bonanza for the storage industry. Add to that the decreasing costs of other production inputs, and it’s clear that the impact of video on how we produce, share, and consume our stories will only accelerate.

Courteous comments welcome, of course. I’ve done work for WD and HGST.


Is NetApp still doomed?

by Robin Harris on Thursday, 20 April, 2017

A reader wrote to ask for the StorageMojo take on NetApp now, as opposed to the assessment in How doomed is NetApp? two years ago.

Q3 had some good news for NetApp. In their latest 10Q filing, they noted that while revenues for the first 9 months of the year were down 3%, for the latest quarter they were up 1% year over year. Why?

The increase in the third quarter of fiscal 2017 compared to the third quarter of fiscal 2016 was primarily due to higher product revenue, driven by the increase in revenues from strategic products more than offsetting the decrease in mature products. The increase in product revenues was partially offset by lower hardware maintenance and other services revenue. The decrease in the first nine months of fiscal 2017 compared to the first nine months of fiscal 2016 was primarily due to lower product revenue and lower hardware maintenance and other services revenue.

In other words, SolidFire and Clustered ONTAP systems are making a difference, but now NetApp is being dragged down by non-renewed maintenance contracts and other service revenue shortfalls – especially for high-end systems – as well as the continued erosion of the FAS line. Gross margins dropped a percentage point, which I suspect was due to more aggressive discounting to win competitive business. Without that, Q3 may not have eked out even 1% growth.

The people problem
People are the most expensive part of NetApp’s cost structure – as at most businesses – and NetApp has laid off about 2500 people – ≈20% of the workforce – in the last couple of years. The cuts have been deepest in sales and marketing, but R&D has also taken a significant hit, dropping from 16% of net revenues in the 9 months ending January 2016 to 13% for the three months ending January 2017.

If you assume that the laid-off people were somewhat productive, you’d expect to see a hit somewhere, and indeed we do.

In sales and marketing expenses, NetApp reports that

Compensation costs for the third quarter and the first nine months of fiscal 2017 compared to the corresponding periods in the prior year were favorably impacted by lower salary and stock-based compensation expenses due to a 12% decrease in average headcount, but were unfavorably impacted by higher incentive compensation expense.

Translation: the remaining people wanted more money to do the extra work we thrust on them.

Same story with R&D, which took a 19% headcount cut:

Compensation costs for the third quarter and first nine months of fiscal 2017 . . . were unfavorably impacted by higher incentive compensation expense.

R&D was also “favorably” impacted by

. . . lower spending on materials and services associated with engineering activities to develop new product lines and enhancements to existing products due to the completion of certain key development projects.

Reduced spending on new product development? What could possibly go wrong?

The StorageMojo take
NetApp’s strategic products – i.e. the ones customers want – are growing at a healthy rate and constitute a majority of their product revenues. So they’ve come back from the dead, right?

Not really. “Slowed the decline” is more accurate. The fundamentals haven’t changed, even with SolidFire and Clustered ONTAP.

NetApp has a good-sized installed base, and can be profitable for years to come servicing it. But it doesn’t look like that installed base is expanding, and in a growing industry, that means decline.

Cutting sales, marketing, and R&D is an easy way to pump up short-term results. But it won’t build the company for the long term or reverse a secular decline.

What NetApp has excelled at is the financial engineering required to keep the stock price high. Throughout all the turmoil of the last few years, their gross margins have remained in the low 60s. So Wall Street is happy. In fact, the stock is trading close to its highest value of the last 5 years.

Would they have been wiser to trade margin dollars for protecting the installed base and growing new accounts? Probably. But the stock price wouldn’t be where it is today. And for execs with big options packages, that’s the bottom line.

Courteous comments welcome, of course.


Spin Transfer Technologies: next up in the MRAM race

by Robin Harris on Wednesday, 19 April, 2017

MRAM technology is hot. I’ve written about Everspin – they’ve been shipping for years and just IPO’d – and now I’d like to introduce Spin Transfer Technologies (STT). They’ve kept a low profile: they aren’t shipping yet, though they are sampling prototypes, and they do have some nice PowerPoints. I spoke to their CEO, Barry Hoberman, and their VP of Engineering, Bob Warren, last week.

Context
STT was founded in 2012, based on IP that NYU incubated for years. Their primary funder is the IP commercialization firm Allied Minds. (IP commercialization is a newish business model too.) Here’s a word from Allied Minds:

Allied Minds is a private equity-funded innovation company that forms, funds, manages and builds startups based on early-stage technology developed at renowned U.S. universities and national labs. Allied Minds serve as a diversified holding company that supports its businesses with capital, management and shared services and is the premier firm to utilize this novel and fully-integrated approach to technology commercialization.

STT has raised $109 million and has embarked on an 8-year development plan. Their secret sauce is

. . . a proprietary OST-MRAM™ (orthogonal spin transfer MRAM) that offers a much higher speed-power-endurance operating point than other conventional perpendicular ST-MRAM technologies. The result: MRAM operating speeds matching those of SRAM cache memories — but without the endurance and data retention limitations or excessive power consumption of other MRAM implementations.

A long game
A company that started in 2012 with an 8-year development program is planning to ship in 2020. They have developed working devices that they are sampling so potential customers can get an idea of what their technology can do.

Which is? The first market for OST-MRAM is as an SRAM replacement. Their big advantage there is density: an MRAM cell is 6-10x smaller than an SRAM cell. But SRAM is also fast, so STT is working to get their parts down to a 10ns cycle time. Not easy. Current MRAM is in the ≈30ns range.

STT also believes they have other key advantages over competing MRAM and ReRAM technologies.

One is a better write error rate. According to STT:

. . . setting a magnetic polarization vector is a probabilistic event—writing an MRAM cell one trillion times will mostly work, but very occasionally may not. One of the biggest challenges with MRAM technology is making the write error rate (WER) as low as possible, and also compensating for the few errors that will occur.

STT believes they have a solid answer to this problem, which will make them competitive.

Another issue is the tension between write speed, power consumption, and endurance. Fast writes require more power, and more power reduces cell endurance. So technology that reduces WER also helps write speed, power, and endurance.
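To see why WER is the battleground, here’s a minimal sketch; the write rate and WER values are invented for illustration, since STT hasn’t published such figures:

```python
# Why write error rate (WER) matters at scale. The write rate and
# WER values are invented for illustration; STT publishes no figures.

writes_per_sec = 100_000_000   # assumed: a busy 10ns-class memory
seconds_per_day = 86_400

for wer in (1e-9, 1e-12, 1e-15):
    errors = writes_per_sec * seconds_per_day * wer
    print(f"WER {wer:.0e}: ~{errors:,.0f} raw write errors per day")

# At 1e-9 that's ~8,640 errors a day; even at 1e-12 it's ~9 a day.
# Hence the need for write-verify or ECC to back up a low intrinsic WER.
```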

The StorageMojo take
One of the points STT makes is that magnetic tunnel junctions do not require atoms to move, unlike ReRAM and PCM, and that MTJ technology is already widely used in disk drive heads. So maybe the HDD folks have the inside track.

STT also thinks they have a shot at replacing DRAM. The claim is that there are just 2-3 more processing steps over CMOS to make MRAM, so the costs should be competitive, while lower power consumption will seal the deal.

But the early market will be IoT and SRAM replacement. They aren’t the only ones targeting IoT, but the concept is certainly viable.

Bottom line: the NVRAM market is heating up. And that’s a very good thing for the IT industry.

Courteous comments welcome, of course.


Sizing the overconfig effect on the array market

by Robin Harris on Thursday, 30 March, 2017

For decades customers routinely overconfigured storage arrays to get performance. Customers bought the most costly hard drives – 15k SAS or FC – at huge markups. Then they’d short-stroke the already limited capacity of these high-cost drives – turning a 900GB drive into, say, a 300GB drive – to goose IOPS and throughput even further.

Then, of course, they put those costly and power-hungry drives behind large, heavily cached, dual controllers, whose DRAM was even more expensive and power-hungry. These were the best decisions customers could make at the time, but they distorted the array market in a couple of ways.

First, average enterprise capacity utilization was stuck at around 30% for years, meaning customers were buying 3x the capacity they used – and very expensive capacity at that. Second, the storage industry grew fat on the 60%+ gross margins that these systems commanded.
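Here’s what that did to the effective price of a usable gigabyte – a minimal sketch, with an assumed illustrative drive price:

```python
# Effective $/GB of an overconfigured 15k drive. The $600 drive
# price is an assumed illustrative figure, not a historical quote.

drive_price = 600        # assumed, USD
raw_gb = 900             # from the post
short_stroked_gb = 300   # from the post: 900GB run as ~300GB
utilization = 0.30       # from the post: ~30% average utilization

print(f"Raw:           ${drive_price / raw_gb:.2f}/GB")                  # $0.67
print(f"Short-stroked: ${drive_price / short_stroked_gb:.2f}/GB")        # $2.00
print(f"At 30% use:    ${drive_price / (raw_gb * utilization):.2f}/GB")  # $2.22
# Either distortion alone roughly triples the effective cost per
# gigabyte actually used - before the controller markup.
```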

Then SSDs & cloud took the punch bowl away
Thus the array market – in both capacity demand and revenue – was smaller than it appeared: if arrays could have delivered the performance customers wanted without the excess capacity, customers would have bought far less. But how much was overconfiguration inflating the market?

Two major headwinds are buffeting the array industry: cloud services and all flash arrays (AFA). While the cloud business is clearly taking revenue away from array vendors, the ready IOPS of SSDs – in both all flash arrays and hybrid arrays, the latter about 4x the revenue of AFA – are also crushing the traditional big iron array market.

The back of the envelope, please
To untangle the effects on the industry, I took a 2011 IDC forecast for 2015 – they expected 3.9% CAGR – added another year of 3.9% growth to get us into 2016, and arrived at a (2011 era) WW enterprise storage market forecast of $38.75 billion.

Then I added up IDC’s 2016 actuals – $34.6B – which is 89% of $38.75B. A $4.15B shortfall.

But that’s not all. Of that $34.6B, about $4B is all flash arrays, which we can assume are mostly displacing high-end arrays.

So the total impact on the legacy array market is on the order of $8B, or over 20%. That’s how much the overconfiguration effect was costing customers – and inflating array revenue – and is now costing vendors as their business shrinks.
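The whole estimate fits in a few lines of arithmetic – every figure below is from the post:

```python
# Reproducing the back-of-envelope, using only figures from the post.

forecast_2016 = 38.75   # $B, 2011-era IDC forecast extended a year
actual_2016 = 34.6      # $B, IDC 2016 actuals
afa_2016 = 4.0          # $B, all flash array revenue within actuals

shortfall = forecast_2016 - actual_2016   # $4.15B
legacy_impact = shortfall + afa_2016      # ~$8B
print(f"Shortfall vs forecast: ${shortfall:.2f}B")
print(f"Legacy array impact:   ${legacy_impact:.2f}B "
      f"({legacy_impact / forecast_2016:.0%} of the forecast)")  # ~21%
```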

The StorageMojo take
Yeah, this is a loosey-goosey estimate. It mixes up cloud and SSD impacts. It leaves out the fact that AFAs cost more per GB than HDD arrays. But all in all, it is a conservative estimate.

Why? Look at Infinidat. They’ve built a modern high-end array, using disk and flash, and it costs about a third of a traditional dual-redundant, big iron array. And it’s triple redundant, implemented in software on commodity hardware – the way modern storage is built.

Infinidat’s strategy – kick ’em when they’re down – is almost as good as their architecture. But without cloud and flash, the Big Iron market would be growing even faster than IDC predicted.

Which is to say that IDC’s 2011 forecast was too conservative. If Big Data didn’t have the cloud and scale out storage to live on, we’d have Not-So-Big Data, but it still would have propelled capacity growth – and the array market would have been even larger than IDC forecast.

Courteous comments welcome, of course. Got a different take? Please share in the comments.


Hike blogging: Deadmans Pass panorama

by Robin Harris on Saturday, 25 March, 2017

I’ve been hiking a lot the last couple of weeks, getting in shape after a long hiatus. Today I took a loop that I’ve never done counterclockwise and even though it shouldn’t have made much difference, it was a much more enjoyable hike.

The loop began with an easy walk of about a mile on the Long Canyon trail, until it meets Deadmans Pass trail. From there it is about another mile to another favorite, Mescal trail, with beautiful views of the lower end of Boynton Canyon and, later, to the rocks to the south.

About halfway through the cloudy sky gave way to partly cloudy, allowing the sun to light up the rocks. Here’s a panorama I took from Deadmans Pass trail looking from the west to the north. Enjoy!


The panorama is extra large – 4000 pixels wide – so there’s a lot to look at. If you look carefully you can see a couple of buildings of the Enchantment resort that is nestled into the mouth of Boynton Canyon.

The StorageMojo take
It being spring break, lots of people were enjoying the trail, both hikers and bikers. I’m glad they were, because more rain is expected this afternoon.


DSSD’s demise

March 22, 2017

A couple of weeks ago Dell EMC announced the demise of the once promising DSSD all flash array. They are planning to incorporate DSSD technology into their other products. As StorageMojo noted 4 years ago, DSSD developed a lot of great technology. But for whatever reason – perhaps turmoil associated with Dell’s purchase of EMC? […]


Avere closes new round – with a twist

March 21, 2017

Avere announced this afternoon that they’ve closed a Series E round of $14 million, bringing their total funding to a cool $97 million. Existing investors Menlo Ventures, Norwest Venture Partners, Lightspeed Venture Partners, Tenaya Capital and Western Digital Capital all re-upped, always a good sign. But the twist? Google Inc. joined the round. The StorageMojo […]


HPE offers to buy Nimble Storage

March 7, 2017

HPE has offered to buy Nimble Storage for $1.09B. The StorageMojo take This is a good move for both companies. HPE has the enterprise footprint that Nimble was spending big to build, and Nimble has an advanced and forward looking storage platform that will bring new ideas into HPE’s enterprise storage group, as well as […]


fsck interruptus and your data

March 2, 2017

Today is the last day of FAST 17. Yesterday a couple of hours were devoted to Work-in-Progress (WIP) reports. WIP reports are kept to 4 minutes and a few slides. One in particular caught my eye. In On Fault Resilience of File System Checkers, Om Rameshwar Gatla and Mai Zheng, posed an interesting question: how […]


StorageMojo’s Best Paper of FAST 2017

March 1, 2017

StorageMojo’s crack analyst team is attending the Usenix File and Storage Technology (FAST) ’17 conference. As usual, there is lots of great content. But only one paper – this year – gets the StorageMojo Best Paper nod. The conference awarded two Best Paper honors as well – so you have plenty of reading to catch […]


It’s simple arithmetic: why Trump’s immigration stance is 100% wrong for America

February 25, 2017

An editorial comment This isn’t complicated. America has 325 million people, out of the world’s 7.4 billion, or about 4.4% of world’s population. Despite the numerical imbalance, America also has the world’s largest economy by most measures. There are two countries, China and India, which are several times the population of the United States. China’s […]


The coming all flash array vs. NVMe/PCIe SSD dogfight

February 23, 2017

In this morning’s post on ZDNet on the diseconomies of flash sharing I discuss the fact that many NVMe/PCIe SSDs are as fast as most all flash arrays (AFA). What does that mean for the all flash array market? Short answer: not good Today a Dell PowerEdge Express Flash NVMe Performance PCIe SSD – ok, […]


Why doesn’t storage innovation come from the storage industry?

February 20, 2017

For all the time and effort poured into the storage market over the last 20 years, surprisingly little innovation has come from storage vendors themselves. Why is that? Hall of shame EMC got its opening when IBM whiffed on the storage array business. IBM had no excuse, as the hard work had been done at […]


Cloud integration now mandatory for storage

February 17, 2017

Spoke to the fine folks at Cloudtenna. Their thing: Cloudtenna is the first platform to generate augmented intelligence on top of your existing file repositories. The Direct Content Intelligence (DirectCI) agent uses deep machine learning to identify the files most relevant to each individual user, ushering in a new era of secure intelligent search, file […]
