The coming all-flash array vs. NVMe/PCIe SSD dogfight

by Robin Harris on Thursday, 23 February, 2017

In this morning’s ZDNet post on the diseconomies of flash sharing, I discuss the fact that many NVMe/PCIe SSDs are as fast as most all-flash arrays (AFAs). What does that mean for the all-flash array market?

Short answer: not good
Today a Dell PowerEdge Express Flash NVMe Performance PCIe SSD offers – ok, is spec’d to offer – ≈700,000 IOPS, with gigabytes per second of bandwidth. That’s in the range of many AFAs. The NVMe/PCIe SSD does all that for thousands of dollars, not $100k or more. And you can order one from Newegg or Dell.
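To make the economics concrete, here is a back-of-envelope sketch. The prices are illustrative assumptions on my part – roughly $3,000 for the drive and $100,000 for an entry AFA – not vendor quotes:

```python
# Back-of-envelope $/IOPS comparison. All prices here are illustrative
# assumptions, not quotes: ~$3,000 for an NVMe/PCIe SSD, ~$100,000 for
# an entry-level AFA, both taken at the ~700,000 IOPS figure above.

def dollars_per_kiops(price_usd: float, iops: float) -> float:
    """Cost per 1,000 IOPS."""
    return price_usd / (iops / 1_000)

ssd_cost = dollars_per_kiops(3_000, 700_000)     # ≈ $4.29 per kIOPS
afa_cost = dollars_per_kiops(100_000, 700_000)   # ≈ $142.86 per kIOPS

print(f"SSD ≈ ${ssd_cost:.2f}/kIOPS, AFA ≈ ${afa_cost:.2f}/kIOPS, "
      f"ratio ≈ {afa_cost / ssd_cost:.0f}x")
```

Even with generous assumptions for the AFA, the drive comes out roughly 30x cheaper per IOPS – before the array’s services are priced in.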

There are two obvious problems with the idea that NVMe/PCIe SSDs can take a major piece of the AFA market.

  • Services. AFAs offer many services that enable managing and sharing the storage. NVMe/PCIe SSDs are drives, leaving the management up to you.
  • Sharing. Put an AFA on a SAN and you have a shared resource. Any PCIe device is marooned in its host server.

But if hyper scale datacenters have taught us anything, it is that shared nothing clusters can offer many services and share hardware. All it takes is an appropriate software layer and lots and lots of network bandwidth.

With the rapid advent of 25 Gb/sec and faster Ethernet, the bandwidth issue is manageable. That leaves the software.
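As a rough sanity check on that claim, here is a sketch of the link math, assuming an illustrative 3 GB/s of sequential throughput per drive and a 90% usable-bandwidth factor for protocol overhead:

```python
import math

# How many Ethernet links does it take to move one NVMe SSD's sequential
# throughput across a cluster? The 3 GB/s drive figure and the 90%
# usable-bandwidth factor are illustrative assumptions.

def links_needed(drive_gbytes_per_s: float,
                 link_gbits_per_s: float = 25.0,
                 efficiency: float = 0.9) -> int:
    """Ethernet links required to carry a drive's full bandwidth."""
    usable_gbytes_per_s = (link_gbits_per_s / 8) * efficiency
    return math.ceil(drive_gbytes_per_s / usable_gbytes_per_s)

print(links_needed(3.0))         # 25 GbE: 2 links for one 3 GB/s drive
print(links_needed(3.0, 100.0))  # 100 GbE: 1 link with room to spare
```

A single 25 Gb/s link carries about 2.8 GB/s of usable payload, so one fast drive nearly saturates it – which is exactly why the move to 50 and 100 Gb/s Ethernet matters for shared-nothing storage.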

Given the size of the market opportunity, the software should arrive soon.

The StorageMojo take
AFAs and their more cost-effective hybrid brethren aren’t disappearing. There will always be applications where they will make sense, and a cadre of people who just don’t like NVMe/PCIe SSDs for enterprise work.

But I think this will be a hot area of contention, since most of the SSD vendors don’t make AFAs. They have little to lose by pushing NVMe/PCIe SSDs for broad adoption.

But it will mark the beginning of the end for array controllers as service platforms. Why rely on a doubly redundant array controller when you can rely on a virtually immortal cluster to host services?

This is going to be fun to watch.

Courteous comments welcome, of course.


Why doesn’t storage innovation come from the storage industry?

by Robin Harris on Monday, 20 February, 2017

For all the time and effort poured into the storage market over the last 20 years, surprisingly little innovation has come from storage vendors themselves. Why is that?

Hall of shame
EMC got its opening when IBM whiffed on the storage array business. IBM had no excuse, as the hard work had been done at Berkeley by Patterson, Gibson, and Katz, in their seminal 1988 RAID paper. Despite the clear guidance offered by the paper, it was a tiny minicomputer add-on memory vendor – EMC – that went big on RAID, giving IBM a thorough shellacking on a market it had owned for decades.

Back in the 90s most of the “advances” in arrays came from the rapid areal density increases in disk drives. Array vendors could tout larger array capacities at lower $/GB, without doing anything more than qualifying a new drive. What more could customers want?

Sun totally blew the filer market, after inventing NFS and having a nice line of servers to run it on. It took NetApp to bring file servers to enterprise respectability.

Sun also blew the chance to own a major chunk of the SAN market, despite being first to market with an all-fibre channel array, as management insisted on FC hubs, not FC switches. Brocade, a startup, did pretty well instead.

Google developed their own scale-out object storage system, whose basic architecture underlies the fastest-growing part of the storage market for over a decade. Where were EMC and NetApp?

The PCIe flash drive was developed by folks from the supercomputer world. Fusion-io, now rolled into WD, brought high-performance internal storage to servers, starting the revolution that has caused storage to migrate into servers instead of away from them. That’s part of what forced EMC to sell itself to Dell.

What’s the problem?
In The Innovator’s Dilemma, Clayton Christensen theorized that the initial small size of disruptive opportunities made them unattractive to large companies that needed large revenue increments to sustain growth. And, of course, people are notoriously poor at estimating exponential growth rates, such as those of hyper-scale data centers and internal storage.

So not only did the initial opportunity look dubious, but management also failed to estimate the growth potential of the new paradigms. Oops!

But there’s a larger problem with the storage industry. Vendors develop a certain mindset and simply don’t believe that their conservative storage customers are going to embrace a new and untried technology – until they do. Then it’s too late.

Instead, vendors focus on creating loopy justifications for selling more of what they already have – Information Lifecycle Management, anyone? – rather than looking beyond the past to create the future. It is left to non-storage people with a broader view – and ignored needs – to see how storage can help them get things done.

The StorageMojo take
Storage vendors are good at faster, better, cheaper. But they seem to be the last to know what the Next Big Thing is.

That’s partly because storage, as the only persistent part of today’s systems, is really hard. The corner cases can be excruciatingly complex. Many of the common assumptions underlying RAID, for example, turned out to be false in practice, which led to ever more complex and costly workarounds.

That, in turn, leads to a kind of techno-blindness, where the details of the old technology are so enthralling that the new technology doesn’t get a serious evaluation. Similarly, financial-blindness – “business is good, margins are great, why rock the boat?” – is an issue for financial management.

Customer conservatism is also a factor. Customers hate getting bit by storage failures, so when they find something that works, they tend to stay with it even if it is far from the optimal solution. That creates an echo chamber for vendors who don’t really want to hear dissenting views.

There is a new paradigm about to hit the industry, which will eviscerate large portions of the current storage ecosystem. Like other major shifts, it is powered by a class of users who are poorly served by existing products and technologies. But if our digital civilization is to survive and prosper, it has to happen. And it will, like it or not.

Courteous comments welcome, of course. I’ve done work for WD.


Cloud integration now mandatory for storage

by Robin Harris on Friday, 17 February, 2017

Spoke to the fine folks at Cloudtenna. Their thing:

Cloudtenna is the first platform to generate augmented intelligence on top of your existing file repositories. The Direct Content Intelligence (DirectCI) agent uses deep machine learning to identify the files most relevant to each individual user, ushering in a new era of secure intelligent search, file sharing, and communications solutions for modern businesses.

Actually, they do way more than that, enabling much more intelligent and flexible use of cloud resources. For example, they also provide auditing capabilities that look at file usage, a hot area for a number of vendors.

Cloudtenna has just cut agreements with NetApp and Nutanix to enhance their cloud integration. That a tired legacy vendor and a hot new vendor both signed with Cloudtenna shows the value of their solution.

But it shows something else: cloud integration is now a customer requirement. Table stakes, if you will.

The StorageMojo take
This is what the storage industry has come to: embracing the enemy that is ripping the guts out of your business – at least in the case of NetApp and other legacy vendors. Since the Nutanix technology model is closely aligned with cloud cost structures, they can be magnanimous.

For decades, storage was the vendor cash cow that just kept on giving. And a lot of folks still think that way. NetApp’s gross margins haven’t budged from 60%+ even while they’ve laid off thousands.

Local storage still has a latency and (perhaps) a security advantage over the cloud. But it’s pretty obvious those advantages are shrinking. So what’s a NetApp to do? At best, a long, slow glide into oblivion.

Courteous comments welcome, of course.


Non-Volatile Memory Workshop ’17

by Robin Harris on Wednesday, 15 February, 2017

StorageMojo’s crack analyst team will be attending the 8th Non-volatile Memory Workshop. Last year’s event attracted 230 participants.

This isn’t the Flash Memory Summit, which focuses on flash memory as a storage technology and its commercial application. NVMW is an academic conference, so you won’t see many polished corporate presentations.

Their mission statement:

The workshop will bring together scientists and engineers from industry and academia who are working on advanced non-volatile storage devices and systems. The goal is to facilitate the exchange of ideas, insights, and knowledge within this broad community of practitioners and researchers, and to foster the establishment of new collaborations that can propel future progress in the design and application of non-volatile memories.

The StorageMojo take
There is no doubt that NVM will drive multiple disruptions in the storage market over the next decade, with an aggregate impact greater than what NAND flash has done over the last decade. If it’s your job to be on the leading edge of enabling technologies, NVMW 17 is a place you’ll want to be.

Courteous comments welcome, of course.


FAST ’17 starts in two weeks

by Robin Harris on Monday, 13 February, 2017

FAST ’17 starts in two weeks. AFAIK, StorageMojo was the first press to attend and report on FAST, starting in 2008.

For years I had it to myself, but in the last few years other publications have started attending. That’s a very good thing, as storage is THE problem of a digital civilization. The more focus on storage, the more likely we are to have a successful long-term digital society.

The StorageMojo take
I wasn’t able to make it last year. So I expect to see a lot of new faces and tech.

If you’d like to chat, feel free to say hello. I’m pleased to be returning!

Courteous comments welcome, of course.


Why Amazon won’t be the IBM of cloud

by Robin Harris on Wednesday, 1 February, 2017

IBM was the driving force in the computer industry beginning with the advent of the IBM 360 mainframe family. Their big idea was to build a family of computer systems that all ran the same software and, generally, used the same peripherals.

The IBM 360 was a brilliant idea and a massive success, making IBM the growth stock for a decade. It also made IBM the giant of the industry, with a 70-80% market share.

What happened to AWS?
Three years ago it looked like AWS was set to be the IBM of cloud, with, by some estimates, an 80% market share. AWS is still the clear leader, but the competition has made major inroads, and will continue to chip away at their lead.

Why?
TL;DR: different inputs; different outputs.

Longer version –

  • Competition. Early on, IBM’s competitors were small firms started by techies who had no idea how to sell to the large companies that were IBM’s bread and butter. Later, some of these firms were bought by larger companies, like NCR and Remington Rand, that were limited by internal politics and/or their particular customer base.
  • Distribution. In the 50s, 60s, and 70s, direct sales was about the only channel, and then, as now, it was expensive, requiring massive investments in people and local infrastructure. IBM had that already, and they leveraged it well.
  • Support. As the excellent movie Hidden Figures showed, early computers were complex, cranky, and a world away from the calculators and tabulating machines that dominated business processing. IBM had its problems with the 360 rollout, but they were expert at keeping customers in the fold.
  • Investment. IBM went all in on the 360 project, investing the equivalent of $5 billion in today’s dollars, building new factories, writing new software, and, of course, designing new hardware. IBM – and their customers – endured massive pain in the process, but, paradoxically, the pain bound customers to IBM – Stockholm syndrome – and gave IBM a giant leap up the learning curve.
  • Software. IBM was plowing virgin ground, but today the prevalence of open source software makes it hard to build a sustainable and significant differentiator. AWS is beavering away to create software lock-in, but the pace of change in software – where were containers five years ago? – means today’s lock-in is tomorrow’s old news.

It’s a new world
AWS faces a very different world than IBM did in 1960. Its competitors are large, profitable, and, most importantly, well differentiated from AWS.

  • Microsoft is working off its Windows base, leveraging the Microsoft Research brain trust, and its massive financial clout. AWS hires bright people too, but MR has a much deeper bench.
  • IBM is leveraging its long term relationships with enterprises to take the lead in private cloud management and support. IBM also has a significant research arm, and it looks like their endemic “suits vs geeks” warfare has been tamped down in the cloud efforts.
  • Google is, surprisingly, the weakest player in this group, which underscores just how tough the competition is. Under Eric Schmidt Google whiffed the cloud market, but Larry Page seems serious about making up for lost time. Google is the least in tune with customers, but has strong roots with developers and, like Amazon, their own massive and profitable infrastructure to build on. They might even figure out how to leverage their Android base.

Dark horses
HPE seems to have dropped out of the race, but Dell/EMC might figure out something that augments their installed base, much as NetApp is attempting. Slim chance, but that’s why they’re a dark horse.

Despite all its misfires online, Apple might get its act together and build something great on its iOS base. Yes, slim chance, but they’ve got money and, maybe, vision. Execution is their problem in this space.

Facebook. They’ve got the scale and the money to compete and become the Everyman’s cloud infrastructure, taking a piece of Microsoft and Apple.

The StorageMojo take
In five years the specter of AWS cloud dominance will be a distant memory. The potential cloud market is enormous and we are, in effect, where the computer industry was in 1965. AWS will be successful, just not dominant. No tears for AWS.

Also, we should remember the downsides of IBM’s dominance. They fought interactive computing and peer-to-peer networking. And while they invented the disk drive, they also worked hard to keep customers locked into proprietary interfaces, impeding the development of a robust storage industry, until they took their eye off the ball.

It’s good for the industry and customers that there are four powerful cloud competitors, as well as tempting private cloud options. Expect the rapid development of the cloud market to continue apace, with benefits for all consumers – and challenges for the competitors.

Courteous comments welcome, of course.


Hike blogging: Tavasci Marsh

January 30, 2017

The Verde Valley is green because it has water. The Verde River is Arizona’s 2nd longest river, at 170 miles (the Little Colorado is longer) and in my neck of the woods flows through the towns of Clarkdale, Cottonwood, and Camp Verde. Yesterday I took my first hike in months, so I chose an easy […]


Scality reimagines storage as art

January 11, 2017

The fine folks at Scality send out a new year book of photos and – of course – promos. This year caught my attention because, as a fan of modern art, especially those with Cadillacs, they gen’d up a photo of a disk drive displayed like one of the famous Cadillac Ranch cars. Here’s an […]


Violin’s bankruptcy

January 6, 2017

Violin Memory, one of the early entrants with an all-flash array, filed for bankruptcy last month. The company continues to operate under Chapter 11, but this is a sad outcome for a pioneer. So much for first mover advantage When I first met with Violin, the original team had a great idea and not much […]


Thunder Mountain

December 26, 2016

StorageMojo’s hike blogging has been on hiatus for a few months, due to a personal issue. But no worries! If all goes according to plan I will be better than new by the end of January. I’m more than ready! I’ve had to skip too many industry events, such as Flash Memory Summit and the […]


Purpose built backup appliances: cloud collateral damage

December 22, 2016

It makes sense that the WW purpose-built backup appliance would be suffering. Cloud-based data gets IaaS provider DR, while cloud backup software handles day-to-day backup, and modern object storage systems optimize archiving. Back in April of 2012, IDC produced a PBBA market analysis that predicted that the PBBA market would be $5.9 billion by the […]


The myth of video anonymity

December 13, 2016

Artificial Intelligence has achieved breakthroughs that directly affect documentary and investigative reporting, or any video where participants need anonymity. Thanks to advances in artificial intelligence (AI), standard methods of cloaking identities through pixelation and audio adjustment are much less effective than they were even five years ago. Lives may be at stake AI, maybe you’ve […]


Nantero raises $21 million and that’s good

December 13, 2016

Nantero raised a $21 million round from investors. The company is one of StorageMojo’s favorite NVRAM vendors, because carbon nanotubes. I also like the fact that their process can use existing fabs, even fully depreciated ones, to build high-density vertical NVRAM. The business model takes after ARM rather than Intel, which means they can harness […]


Cloud market heats up

November 18, 2016

In 2014 Gartner estimated that Amazon Web Services had 5x the utilized compute capacity of the rest of the cloud providers. There’s a couple of qualifiers there – utilized, compute – but as a rough guess, it looked like AWS had around an 80% market share. But no more. In a recent report Synergy Research […]


When is a feature a bug?

November 17, 2016

Ten years ago in Enterprise IT: the elephant’s graveyard I wrote about the upmarket trap: Engineering and marketing find it easy to justify fun new technology since a 10% goodness increase on a $500,000 machine is worth $50,000, while on a $1,000 machine it is worth $100 only if the customer is knowledgeable enough to […]
