The array IP implosion

by Robin Harris on Monday, 23 May, 2016

We’ve seen this movie before
The value of legacy array intellectual property is collapsing. This isn’t complicated: SSDs have made IOPS – what hard drive arrays spent the last 25 years optimizing for – easy and cheap.

Think of all the hard-won – well, engineered – optimizations that enabled HDD-based arrays to dominate the storage market for much of the last 20 years.

  • RAID codes. Software disk mirroring – a huge money-maker in the 80s – moved into small, cheap(er) controllers, followed by RAID 5 and, later, RAID 6. (A toy parity sketch follows this list.)
  • Caching. Given the bursty nature of most storage traffic, controller-based caches dramatically improved average performance.
  • Redundancy. RAID managed – but never solved – the problem of disk failure, but drive interface, driver, and array controller redundancy issues – such as cache coherency – required lots of careful problem solving.
  • I/O stack. This wasn’t typically a storage vendor problem, but there was lots of collaboration.
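
The RAID codes in the first bullet are a good example of how little of the old magic remains scarce: the core idea now fits in a few lines. Here is a minimal, hypothetical Python sketch – toy helper names, no striping, rotation, caching, or write ordering, so nothing like real array firmware – of single-parity XOR:

from functools import reduce

def xor_blocks(blocks):
    # XOR equal-length byte blocks together, byte by byte.
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def parity(data_blocks):
    # RAID 5-style parity for one stripe is just the XOR of its data blocks.
    return xor_blocks(data_blocks)

def rebuild(survivors, parity_block):
    # XOR the surviving blocks with parity to recover the single missing block.
    return xor_blocks(survivors + [parity_block])

stripe = [b"AAAA", b"BBBB", b"CCCC"]   # one stripe across three data disks
p = parity(stripe)
assert rebuild([stripe[0], stripe[2]], p) == stripe[1]   # "disk 1" failed

The hard parts the vendors actually charged for – the write hole, partial-stripe writes, rebuilds under load – sit on top of this simple core.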

While these techniques remain relevant, the massive “storage operating systems” embedded into legacy storage arrays are now boat anchors, dragging performance down while remaining costly in support and CPU cycles. That’s been the problem plaguing VMAX and VNX ever since EMC embedded the first STEC SSD into them: the architecture allowed only a fraction of the possible SSD performance to be achieved.

Remember the minicomputer?
Joe Tucci does. He was the CEO of Wang Labs after it emerged from the PC holocaust that took down all the minicomputer companies.

The PC started the process of destroying decades worth of intellectual property value held by vertically integrated computer companies. Newcomers like Dell could buy CPUs from Intel, an OS from Microsoft, an RDBMS from Oracle, disks from Seagate, and networking from 3Com and Novell for far less than it cost the DECs and Data Generals to keep upgrading their own proprietary products.

Today, the variety of storage alternatives – cloud, AFA, hybrids, converged, etc. – is destroying the value of legacy array software and architectures. The increasing pace of storage change from new non-volatile memories will accelerate the process.

The StorageMojo take
The destruction of the minicomputer industry by Wintel is not ancient history. So why have EMC and NetApp been so slow to respond?

To be fair, EMC’s Tucci has been more active than NetApp’s team in taking on cloud and flash. But selling EMC to Dell – after failing to persuade HP to buy it – shows what Tucci thinks of EMC’s chances as a standalone storage company.

Storage systems will remain on the market. There are a number of options available today that are as cheap as, or cheaper than, cloud storage – and earning good margins. More on that in another post.

Courteous comments welcome, of course.

WD is not a disk drive company – and not a moment too soon

by Robin Harris on Friday, 20 May, 2016

While you weren’t looking, Western Digital stopped being a hard drive company, morphing into a storage company. Such transitions are nothing new for a company that started life making calculator chips in the 1970s, morphed into SCSI, ATA and graphics in the 80s, and built its disk drive business in the 90s and 00s.

The closing of the SanDisk deal puts an exclamation point on the transition, but it started in 2011 with the acquisition of HGST. The acquirer of IBM’s disk operations has since acquired Skyera – winner of 2012’s content-free announcement award – Amplidata, and server-side flash vendor Virident (wonder how integrating that with SanDisk’s Fusion-io will go?).

Amplidata’s software is the basis for the HGST Active Archive System object store. Since the website refers to “Systems,” we can expect more system products from HGST.

The StorageMojo take
It’s a B-school chestnut: the railroads thought they were in the railroad business – instead of transportation – so they lost out to truckers. Cheap flash IOPS has destroyed the value of HDD-optimized array controllers – which has dramatically reduced the cost of entry into storage systems.

Add to that the advent of sophisticated remote management – much advanced over 90s “call home” features – and much of the rationale for a costly enterprise sales and support force goes away. That further lowers the market entry bar – and rips even more value out of legacy vendor infrastructure – not that Michael Dell is likely to notice for a few years.

Of course, the sweet spot for entry is the sub-$50k storage system price band. The legacy vendors don’t play there, so buyers aren’t expecting 3 martini lunches and catered golf events. But as AWS has demonstrated, customers will give up a lot for a low price.

Expect to see Seagate follow suit. Samsung and Toshiba might as well, but both are distracted by other problems.

Congratulations to the WD exec team on yet another well-executed pivot to a larger market. This will be fun to watch.

Courteous comments welcome, of course.

Scale and the all-flash datacenter

by Robin Harris on Monday, 9 May, 2016

There’s a gathering vendor storm pushing the all-flash datacenter as a solution to datacenter ills, such as high personnel costs and performance bottlenecks. There’s some truth to this, but its application is counter-intuitive.

Most of the time, storage innovations benefit the largest and – for vendors – most lucrative datacenters. OK, that’s not counter-intuitive. But in the case of the AFDC, it is smaller datacenters that stand to benefit the most.

Why?
It’s a matter of scale. Small, low-capacity datacenters are naturals for all-flash. The initial cost may be higher, but the simplification of management and the generally high performance make it attractive.

Your databases go fast with little (costly) tuning and management. VDI is snappy. Performance-related support calls – and their costs – drop off. SSD failure rates should be lower than HDDs’, but make sure you’re backing up, since SSDs have a higher rate of data corruption.

Scale drives this because even though flash may be only 5-10x the cost of raw disk capacity, as capacities grow the media cost – SSD and/or HDD – comes to dominate the total system cost. Below some threshold capacity, the cost of the array controller and associated infrastructure outweighs the media cost, so the flash premium matters little.

This explains why Nimble’s average customer is interested in AFDCs, while Google, Facebook and AWS aren’t. Nimble’s SMB customers are a fair example of where an AFA will often make sense.

Where is the cutoff? Today it looks like 250 to 350 TB is where it makes sense to include disk in your datacenter. It’s not likely that you’ll be pounding on 300TB enough to justify flash. But expect the cutoff to rise over time, as it has for tape.
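
To make the crossover concrete, here is a back-of-the-envelope sketch. The dollar figures are illustrative assumptions of mine, not vendor quotes: a fixed controller-plus-infrastructure cost for either system, and per-GB media prices with flash at roughly the 5-10x premium noted above.

CONTROLLER_COST = 25_000   # assumed fixed controller + infrastructure cost
FLASH_PER_GB = 0.50        # assumed flash media price, $/GB
DISK_PER_GB = 0.07         # assumed nearline disk media price, $/GB

def system_cost(capacity_tb, media_per_gb):
    # Fixed infrastructure cost plus media cost that scales with capacity.
    return CONTROLLER_COST + capacity_tb * 1_000 * media_per_gb

for tb in (50, 100, 250, 350, 500):
    flash = system_cost(tb, FLASH_PER_GB)
    disk = system_cost(tb, DISK_PER_GB)
    print(f"{tb:4d} TB  all-flash ${flash:>9,.0f}  disk ${disk:>9,.0f}  premium {flash / disk:.1f}x")

At 50 TB the all-flash premium is under 2x because the controller dominates; by 350-500 TB it is headed toward the raw media ratio, which is why disk starts earning its keep at a few hundred TB.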

The StorageMojo take
The scale vs cost issue isn’t new. Tape continues as a viable storage technology because the cost of the media is so low. But tape’s customer universe is limited because, for more and more users, backing up to disk or to cloud object stores is a cost and functional equivalent.

What is new is that disk is going down the same path that tape has been on for decades. The bigger problem for HDDs is that the PC market – the disk volume driver – continues to shrink while flash takes a larger chunk of the remaining PC business. Disk vendors have to adjust to a lower volume market, just as tape vendors have.

Lest SSD vendors get complacent, the really high performance database applications are going in-memory. It’s a dogfight out there! And even more changes are in store.

Courteous comments welcome, of course. In your experience, where is the cutoff point?

Why storage is getting simpler

by Robin Harris on Monday, 2 May, 2016

Goodbye, old bottleneck
StorageMojo has often asked buyers to focus on latency rather than IOPS, since SSDs have made IOPS cheap and plentiful. This naturally leads to a focus on I/O stack latency, which multiple vendors are attacking.
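
A rough illustration of why the stack is now the target (my numbers, not any vendor’s): by Little’s Law, achieved IOPS is roughly queue depth divided by total latency, so software overhead that was rounding error behind a 5 ms disk seek eats half the budget behind a 100 µs flash read.

QUEUE_DEPTH = 32   # assumed outstanding I/Os

def iops(device_us, stack_us, qd=QUEUE_DEPTH):
    # Little's Law, roughly: throughput = concurrency / latency.
    return qd / ((device_us + stack_us) / 1_000_000)

print(f"HDD: 5000us device + 100us stack -> {iops(5000, 100):>9,.0f} IOPS")
print(f"SSD:  100us device + 100us stack -> {iops(100, 100):>9,.0f} IOPS")
print(f"SSD:  100us device +  20us stack -> {iops(100, 20):>9,.0f} IOPS")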

But what are the implications of cheap IOPS for enterprise data center operations? That’s what’s motivating the secular trend for simpler storage.

First things
Is data center storage getting simpler? It’s clear that the big iron arrays are sucking wind – EMC’s VMAX/VNX (Information Storage) group saw peak sales in Q4, 2014 – and I forecast that the decline will accelerate in coming quarters and last for years.

We are also seeing a trend play out in a variety of product categories. The rise of data-aware storage from Qumulo and Data Gravity makes it much simpler for less-skilled staff to identify storage issues.

The converged and hyperconverged platforms roll storage management in with systems management. The advanced remote monitoring and sophisticated but easy-to-use DR features from Nimble are another example.

And, of course, there are the simpler object storage interfaces of the cloud vendors, who also remove most of the management overhead from corporate IT. The software-only, commodity-based vendors, whose products partly compete with cloud, also get it: Scale Computing’s first word on their website is “Simple”.

Why now?

  • Cost. The driver. People are ≈70% of enterprise data center cost. Simpler storage = fewer and cheaper people.
  • Flash. Tuning HDD-based arrays for performance took a lot of knobs and dials – and people who understood them. Flash makes high performance a given.
  • Cloud. Cloud is the vise crushing EDC costs. CFOs who don’t know a switch from a server can read AWS prices and put the heat on CIOs.
  • Scale. Everyone is handling much larger data stores now, so automation is a necessity.

The StorageMojo take
The IT equivalent of Formula 1 race tuning won’t disappear: some apps will always require the utmost performance. But the huge mass of users will take lower costs over the last possible IOP.

The losers are the systems that make customers pay for features they no longer need. Winners will successfully blend ease of use with performance and availability – at a competitive price.

Is the storage admin an endangered species? Yes. Their numbers will shrink as the complexity that makes them necessary declines.

Courteous comments welcome, of course.

Storage surprises at NAB 2016

by Robin Harris on Friday, 22 April, 2016

I did NAB a little differently this year: attended on Wednesday and Thursday, the last two days of the floor exhibits. Definitely easier, although many of the execs left Wednesday.

But that wasn’t a surprise. Here’s what did surprise me:

  • EMC seemed to have less of a presence than in past years. I expected more.
  • HGST is pushing aggressively on its – and WD’s – systems business. They’re one to watch.
  • Thunderbolt 3 storage is definitely a thing: 40Gb/s of bandwidth essentially for free? Or 20Gb/s for even less? Of course!
  • Thunderbolt-based clusters may also be a thing. Need to learn more.
  • Several companies I hadn’t seen before: OpenIO, Quobyte, Symply, Glyph and Dynamic Drive Pool. The last had a good-sized booth and has been in business for 10 years – but I’d never heard of them.
  • Video formats – 4k/8k, drone, 360°, surveillance, streaming, phone – are all growing rapidly. OK, not 8k – yet.

The StorageMojo take
I’ll be writing some more about what I saw at NAB. I also asked a number of companies for briefings, including Pure.

There are some larger trends, beyond hardware, that I saw. The big one: complex storage systems are on the way out. More on that later.

Courteous comments welcome, of course.

NABster 2016

by Robin Harris on Monday, 18 April, 2016

Tomorrow the top StorageMojo superforecasting analysts are saddling up for the long ride to the glittering runways of Las Vegas. The target: NAB 2016.

As much as I like CES, NAB is my favorite mighty tradeshow. It is a toy show for people with very large budgets – and we all know who gets the best toys.

The StorageMojo take
If you have some storage coolness you would like to share with the world, shoot me a comment with your show floor address and I’ll come for a visit. Really. I’m looking for you!

Superforecasting

April 18, 2016

I see a forecast in your future. A few months ago I wrote about the best single metric for measuring marketing. That metric: the forecast, compared to actuals. If the forecast is accurate to ±3%, you’ve got great marketing. If ±10%, you’ve got good marketing. So I was happy to see a book […]

Smart storage for big data

April 15, 2016

IBM researchers are proposing – and demoing – an intelligent storage system that works something like your brain. It’s based on the idea that it’s easier to remember important things, like a sunset over the Grand Canyon, than the last time you waited for a traffic light. We’re facing a data onslaught like we’ve never seen […]

Hike blogging: Soldiers Pass

April 12, 2016

If you’ve been wondering about the dearth of hike blogging the last few months, wonder no more: I wasn’t hiking. A nasty bug made its way through Arizona and I caught it, thought I shook it, went to CES, and relapsed, big time. So I’ve been taking it easy. But I’ve started up again, and […]

Qumulo comes of age

April 12, 2016

Qumulo is crossing the chasm: they have 50 paying customers with over 40PB in production. Real production, not POCs. That includes clusters from 4 nodes to more than 20 nodes with over 4PB at a large telco. They practice agile development with 24 software releases in the last year. Roughly a drop every two weeks. […]

Plexistor’s Software Defined Memory

April 5, 2016

What is Software Defined Memory? A converged memory and storage architecture that enables applications to access storage as if it were memory, and memory as if it were storage. Unlike most of the software defined x menagerie, SDM isn’t simply another layer that virtualizes existing infrastructure for some magical benefit. It addresses the oncoming reality […]

So, how much will Optane SSDs cost?

April 1, 2016

I opined recently on ZDNet that I expected Optane SSDs would come out at $2/GB. Josh Goldenhar of Excelero had a thoughtful rejoinder: . . . I think Octane will be more expensive. You mentioned $0.20/GB for flash – but I think that’s for SATA flash or consumer flash. The Intel NVMe p DC3XXX line […]

StorageMojo’s 10th birthday

March 29, 2016

On March 29, 2006, StorageMojo.com published its first posts to universal indifference. The indifference didn’t last long: the second week of StorageMojo’s existence I published 25x Data Compression Made Simple. The post was /.’d and the vituperation rolled in claiming no such thing was possible: over 400 mostly negative comments on /. and dozens more […]

CloudVelox: building a freeway into the cloud

March 23, 2016

You have a data center full of Windows and Linux servers running your key applications. How do you migrate them to the cloud; or, at the very least, enable cloud-based disaster recovery? That’s the question CloudVelox is trying to answer. Their software enables enterprises to move their entire software stack to a public cloud. I […]

Integrating 3D Xpoint with DRAM

March 21, 2016

Intel is promising availability of 3D Xpoint non-volatile memory (NVM) this year, at least in their Optane SSDs. But Xpoint DIMMs are coming soon, and neither will give you anything close to a 1,000x performance boost. In a recent paper, NOVA: A Log-structured File System for Hybrid Volatile/Non-volatile Main Memories, Jian Xu and Steven Swanson of […]
