Enterprise open source storage software adoption & the cloud

by Robin Harris on Friday, 14 March, 2014

Good article today in SearchStorage about enterprise open source storage software (OSSS) adoption. StorageMojo is quoted, but I found the survey results from the OpenStack Foundation most interesting:

OpenStack’s website lists more than 70 user groups around the world, and the uptake is reflected in a survey of cloud operators and end users done last year by the OpenStack User Committee and Foundation. Based on 822 survey responses, the staff cataloged 387 OpenStack cloud deployments in 56 countries, with storage and backup ranking sixth among the top applications or workloads.

The survey indicated that 173 respondents use OpenStack object storage features. Eight deployments had more than 1 million stored objects, including one with more than 500 million, and 22 implementations had more than 100 TB of block storage.

That’s impressive as a proof of concept alone.

What about uptake?
TheInfoPro’s Marco Coulter wonders if open source storage software will ever find success in the enterprise:

If you think of how it went in the operating system market, Linux crept into the colleges and then sort of crept into the enterprise. Then vendors arrived delivering support of it and making it able to be purchased from a vendor. We’ve never really seen that same pattern in storage. I don’t think it’s a certainty that it will fit for the enterprise.

The StorageMojo take
Three years ago StorageMojo might have agreed with Mr. Coulter, expecting that scale-out storage appliances like today’s Nutanix would vacuum up the market. And while Nutanix will certainly be successful, the dogfight brewing in the IaaS space with Google’s newly aggressive pricing will pressure IT to take another look.

IaaS storage prices are dropping almost weekly. While CFOs probably don’t understand why, they can read a price list. They’ve listened to IT’s justifications for costly storage for years and now they have a simple $/TB comparison.
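
To make that comparison concrete, here's a hedged sketch; every price below is hypothetical, not any vendor's or cloud provider's actual list price:

```python
# Hedged illustration of the CFO's $/TB comparison. Every price here is
# hypothetical, not any vendor's or cloud provider's actual list price.

def internal_cost_per_tb_month(purchase_per_tb: float,
                               annual_support_pct: float,
                               service_life_years: float) -> float:
    """Amortize purchase plus support over the array's service life."""
    total = purchase_per_tb * (1 + annual_support_pct * service_life_years)
    return total / (service_life_years * 12)

def cloud_cost_per_tb_month(price_per_gb_month: float) -> float:
    """Cloud object storage is quoted per GB-month; scale to TB-month."""
    return price_per_gb_month * 1000

internal = internal_cost_per_tb_month(purchase_per_tb=2000,
                                      annual_support_pct=0.15,
                                      service_life_years=4)   # ~$66.67/TB-month
cloud = cloud_cost_per_tb_month(price_per_gb_month=0.03)       # $30.00/TB-month

print(f"Internal array:    ${internal:.2f} per TB-month")
print(f"IaaS object store: ${cloud:.2f} per TB-month")
# Ignores power, admin and cloud egress/request fees; it's the headline
# number the CFO sees on the price list.
```

The point isn't the specific numbers; it's that the cloud figure is published, simple and falling, while the internal figure takes a spreadsheet to defend.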

CIOs may not be comfortable with OSSS, but they’ll have to show the CFO that they can respond to a changing environment. The CIO’s choice is stark: external or internal. And since local control of storage is often key to predictable and reliable performance, OSSS will get another look.

It won’t be easy – few IT shops have the expertise to integrate commodity hardware with OSSS today – but IT shops face a hollowing out as IaaS gets more competitive. If OSSS suppliers can offer one-click installs and reference configurations on standard hardware, they could tip the balance and find much faster acceptance than anyone expects.

Courteous comments welcome, of course.


Is Violin Memory done?

by Robin Harris on Thursday, 13 March, 2014

Remember this sad story from StorageMojo:

They haven’t reported financials for almost 3 quarters. Their stock is trading at about 20% of its peak. They fired their CEO. . . . And NetApp was trying to strangle [them] (see NetApp filers for $1/GB?) in its crib.

Are they goners?

I don’t think so.

Was I foolishly optimistic? Maybe. And yet Isilon managed to hang on and succeed.

Why the history?
I tweeted Friday that

Violin Memory gets new CMO from EMC. Good move: why buy new XtremIO when mature flash offered?

This got a couple of responses, first from @ahl, aka Adam Leventhal, formerly of the ZFS team:

@StorageMojo do you really think that Violin can turn it around? Would you advise a customer to buy Violin over XtremIO?

And later from @StoragePro, who works for NetApp.

@StorageMojo Violin let go 20% of their people. That is the kiss of death.

Can Violin turn it around?
I think so. Look at their assets:

  • Revenue. Over $100m. That means customers who like the product.
  • Technology. The Violin architecture is unique – see StorageMojo’s Video White Paper – and, AFAIK, still offers the lowest and most predictable latency of any flash array.
  • Financial strength. Violin has over $130m in the bank and some strong backers such as Toshiba. Laying people off is painful, but that’s how you stretch the runway.
  • New management. The new team must have been promised investor support or they wouldn’t have signed on.

Second question
Would I recommend Violin over XtremIO? Of course!

Why? Because a storage product that’s been out for 5 years is inherently more mature and stable than one that’s been out for a few months.

I’ve been through plenty of beta programs, and try as they might, they never catch most of the bugs. Only customers can do that, so more customers and more runtime mean more bugs found and fixed.

Violin’s 1st gen product was bare-bones, but it worked and got them lots of customer feedback. The 2nd gen 6000 series incorporated that feedback and has been much more successful.

The big knock on Violin has not been hardware but software. I’ve heard that they’ve been working on that and their new Maestro Suite is part of the software offensive. Expect to see more.

The StorageMojo take
Few companies get as far along as Violin Memory has. Isilon’s problems were worse, yet it survived and prospered. But it wasn’t easy, especially with the big guys spreading FUD.

VMEM’s new team has its work cut out for them, but turning the company around is not an impossible dream. Stop the bleeding. Focus on sales and support. Show progress and profitability.

And then all this will be forgotten in 5 years.

Courteous comments welcome, of course. I’ve done work for Violin and I’ve liked the technology since the original team briefed me on it years ago. In fact, I bought some stock a while ago because of the analysis I’ve laid out here. You’re welcome to believe I’m a supporter only because I own the stock, but I’m not the only one who thinks they’re undervalued. And no, sadly, I didn’t buy Isilon back then.


Cloud and the current infrastructure brawl

by Robin Harris on Friday, 7 March, 2014

Thoughts about cloud.

The economic basis is twofold: economies of massive scale and commodity parts.

Scale
The corollary to massive scale is monoculture. Monocultures have their advantages – look at America’s corn-growing prowess – but their economic advantages also bend use cases to sub-optimal ends: ethanol; high-fructose corn syrup. Or Elastic Block Storage.

There are a few natural tech monocultures. Everyone is on Facebook because everyone is on Facebook and no one is on Google+. Everyone buys Office because everyone already has Office.

Unlike, say, utilities, tech monopolies tend to rise and fall. PCs no longer rule. Windows and x86 domination are imperiled.

The current monopolists, seeking growth, are encroaching on each other’s formerly unchallenged turf: Cisco into storage and servers; HP into networking; Microsoft into mobile; IBM into cloud; Amazon and Google into almost everything.

Another couple of years and we may see serious competition for infrastructure dollars. But where?

Commodity
VCs should stop reading right now because what is coming isn’t pretty. What’s already happened in servers – gross margins in the 15-30% range – is headed to networks and storage.

Software defined networking? Commoditization.

Scale-out storage? Commoditization.

It won’t be easy, because the big players want to stay big. They’ll muck around in the standards bodies and open software groups to show they “get it” with no intention of getting to four nines quality – unless you buy it all from them.

That happened with Fibre Channel. That’s why SANs never got the network effects we’ve enjoyed with Ethernet. Sure, that stunted the SAN market, paved the way for NAS – which didn’t take off until after SAN’s promise had faded – and now cloud, but the margins were great until Amazon took the punch bowl away.

The StorageMojo take
Three business models suggest themselves for this brave new world.

  • The Red Hat model of nearly free software with services.
  • Commodity-based, channel-friendly appliances with gross margins in the 40s (or a really great TCO story).
  • Scale-out converged compute and storage with entry pricing in the $20s.

Red Hat is doing the Red Hat thing darn well. They’re a $1.5B company with a stratospheric P/E ratio and a market cap near NetApp’s. And they’re investing 20% of sales in R&D.

Nimble Storage is kicking butt with the appliance model and a killer TCO story. I expect their GM will be in the 50s if not the 60s.

Nutanix is possibly doing even better than Nimble in the converged scale-out category. They could do better on entry-level pricing, but we Americans like big numbers and their average sale price is about double Nimble’s. But they offer a better deal than Cisco’s UCS even now.

Scaling up is difficult, but scaling down the Google/Amazon infrastructure may be harder. Unlike in the past, though, many companies are thirsting for change because the cloud has made IT costs much more transparent. The demand is there.

Amazon will remain a fast-moving target, but the advantages of local infrastructure are worth something too. The next big winners will figure out how to unlock that value.

Courteous comments welcome, of course. What say you?


FAST ’14: the big picture

by Robin Harris on Monday, 24 February, 2014

StorageMojo isn’t done reviewing papers, but this post is about the bigger picture that emerged from the papers and presentations. Perhaps this is pattern-finding gone mad – intuition – but hey! – perhaps not.

Here are the bones of trends observed from the research and conversations at FAST ’14.

  • Energy use is the coming measure of computational efficiency. This isn’t about green data centers, but about using energy consumption as instrumentation to understand system-level performance and efficiency. It starts with mobile devices, where the payoff is greatest, but it is moving up to servers and larger systems. (A small worked example follows this list.)
  • Hybrid systems are gaining momentum – or put another way – storage is getting more colorful. System designers have been painting in black and white – DRAM and disk – and gray – cache – for decades. That’s what storage is “supposed” to look like. But the palette is getting more colorful, first with flash, and now with new technologies and concepts – such as advanced erasure codes – that mean more options for creative engineering.
  • Artisan computing – AKA enterprise legacy infrastructure – will come under much more economic pressure. We’ve only begun to tap the benefits of hyperscale computing and storage. The gap between enterprise and cloud costs and availability will continue to grow. Smart CIOs will retool their in-house staffs to deliver high-value custom services to compete.
  • How about a cloud service that runs legacy apps? Most don’t need high performance or much data. Lots of CFOs would sign off on that service.
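
On the energy-as-instrumentation point above, a minimal sketch of the kind of metric involved; all numbers are made up for illustration:

```python
# Minimal sketch of energy-as-instrumentation: joules per I/O, computed from
# average power draw and throughput. All numbers are made up for illustration.

def joules_per_io(avg_power_watts: float, ios_per_second: float) -> float:
    """Energy cost of a single I/O operation, in joules."""
    return avg_power_watts / ios_per_second

hdd_array  = joules_per_io(avg_power_watts=450.0, ios_per_second=5_000)    # 0.090 J
flash_tier = joules_per_io(avg_power_watts=250.0, ios_per_second=200_000)  # 0.00125 J

print(f"HDD array:  {hdd_array * 1000:.2f} mJ per I/O")
print(f"Flash tier: {flash_tier * 1000:.3f} mJ per I/O")
```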

The StorageMojo take
I’m told this was the largest FAST conference ever – over 500 attendees. What surprised me was the apparent paucity of hyperscale players from Facebook, Amazon, Google and Azure. They were there, but less visible than in the past.

Which points to the democratization of advanced file and storage research – or perhaps a return to the old normal. These technologies are fundamental to a digital civilization – a transition we’ve just begun – and much remains to be done before we have the robust persistence we need.

Courteous comments welcome, of course.


StorageMojo’s Best Papers of FAST ’14

by Robin Harris on Friday, 14 February, 2014

StorageMojo publisher TechnoQWAN’s crack analysts have been poring over the FAST ’14 papers. After much contention and more than a few retries they have achieved consensus.

There is so much good work presented at FAST that it seems unfair to pick just a few for mention. Readers are encouraged to decide for themselves. Papers should be available on the FAST web site sometime next week.

Honorable Mentions
Automatic Identification of Application I/O Signatures from Noisy Server-Side Traces by Yang Liu, Raghul Gunasekaran, Xiaosong Ma and Sudharshan S. Vazhkudai.

Could this solve the virtual machine I/O blender problem?

(Big)Data in a Virtualized World: Volume, Velocity, and Variety in Cloud Datacenters by Robert Birke, Mathias Björkqvist, Lydia Y. Chen, Evgenia Smirni and Ton Engbersen.

Analysis of a private cloud consisting of 8,000 physical boxes, hosting over 90,000 VMs using over 22 PB of storage to see how applications, CPU activity, growth, velocity, capacity and network demand interact. Nothing startling, but valuable work.

STAIR Codes: A General Family of Erasure Codes for Tolerating Device and Sector Failures in Practical Storage Systems by Mingqiang Li and Patrick P. C. Lee.

Advanced erasure codes are a major contributor to storage efficiency and robustness. This paper explores codes that take into account the correlated nature of block and device failures.

Evaluating Phase Change Memory for Enterprise Storage Systems: A Study of Caching and Tiering Approaches by Hyojun Kim, Sangeetha Seshadri, Clement L. Dickey and Lawrence Chiu.

StorageMojo liked this one so much it was featured on ZDNet and StorageMojo.

Best Papers
ViewBox: Integrating Local File Systems with Cloud Storage Services by Yupu Zhang, Chris Dragga, Andrea C. Arpaci-Dusseau and Remzi H. Arpaci-Dusseau.

Given the scale and cost advantages of cloud storage we want to use it everywhere we can, but it isn’t as safe as we want to believe. ViewBox attacks the problems of data corruption and data inconsistency between local and cloud storage.

Toward strong, usable access control for shared distributed data by Michelle L. Mazurek, Yuan Liang, William Melicher, Manya Sleeper, Lujo Bauer, Gregory R. Ganger, Nitin Gupta and Michael K. Reiter.

Even people who know something about computers find maintaining control over their online data difficult. The Penumbra distributed file system offers access controls designed to match user mental models for data classification and protection.

The StorageMojo take
The 25 papers to be presented at FAST ’14 are all interesting. StorageMojo will be delving into more of them in the next few weeks.

Stay tuned!

Courteous comments welcome, of course. Thanks to Usenix for enabling StorageMojo to attend.


Where does ReRAM fit?

by Robin Harris on Thursday, 13 February, 2014

Over on ZDNet this morning I wrote about a FAST ’14 paper modeling how a PCM SSD could be used in a hybrid – PCM SSD, flash SSD, HDD – storage system.

For an academic research paper, this one is refreshingly focused on the business case enabled by the technology.

Any new NVRAM is going to cost way more than NAND flash on a per-gigabyte basis, due to economies of scale, learning curves and short-term market demand – NAND has a broad support ecosystem that the new tech won’t have. This is why the most likely successor will be built on current NAND fab lines.

Therefore the question: are there any leverage points in current hybrid arrays – such as Avere and Nimble – that would get an outsize benefit from a new NVRAM with new and improved, but costlier, characteristics?

From the current grab bag of new NVRAM – largely types of resistance RAM (ReRAM) – these characteristics could include:

  • Speed. Much faster writes.
  • Endurance. What if we could write to NVRAM 10 million times instead of 10 thousand? (A rough lifetime sketch follows this list.)
  • Stability. The more writes to flash, the shorter the time the data will be held.
  • Power. What if we didn’t have to pump 20 volts to write NVRAM?
  • Density. Soon NAND will be bumping up against the physics and further shrinkage will be difficult or impossible.
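
On the endurance point flagged above, a rough lifetime sketch under an assumed steady write load; all numbers are illustrative, not vendor specifications:

```python
# Rough lifetime sketch for the endurance bullet above. All numbers are
# illustrative assumptions, not vendor specifications.

def lifetime_years(capacity_tb: float, endurance_cycles: int,
                   host_writes_tb_per_day: float,
                   write_amplification: float = 2.0) -> float:
    """Years until the media's program/erase budget is exhausted."""
    total_host_writes_tb = capacity_tb * endurance_cycles / write_amplification
    return total_host_writes_tb / host_writes_tb_per_day / 365

flash_like = lifetime_years(capacity_tb=1.0, endurance_cycles=10_000,
                            host_writes_tb_per_day=10.0)        # ~1.4 years
reram_like = lifetime_years(capacity_tb=1.0, endurance_cycles=10_000_000,
                            host_writes_tb_per_day=10.0)        # ~1,370 years

print(f"10K-cycle media: {flash_like:.1f} years at 10 TB/day")
print(f"10M-cycle media: {reram_like:,.0f} years at 10 TB/day")
```

At that endurance level wear simply stops being a design constraint, which changes how aggressively a hybrid array can absorb writes.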

This isn’t about storing some parameters in NVRAM, but about using dozens or hundreds of gigabytes to produce a faster, better, cheaper storage system.

The StorageMojo take
Any ideas, architects?

In the not-yet-publicly-available FAST ’14 paper Evaluating Phase Change Memory for Enterprise Storage Systems: A Study of Caching and Tiering Approaches, testing a 64GiB PCM SSD, authors Hyojun Kim, Sangeetha Seshadri, Clement L. Dickey and Lawrence Chiu of IBM Almaden Research conclude

Based on the results above, we observe that PCM can increase IOPS/$ value by 12% (bank) to 66% (telecommunication company) even assuming that PCM is 4× more expensive than flash.

Such numbers are not likely to drive commercial adoption, especially given the rate at which flash SSD prices are dropping. But the concept of leveraging the advantages of ReRAM in a hybrid system is intriguing.
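
For a sense of how the IOPS/$ arithmetic plays out, here's a hedged sketch. The 4× price premium is the paper's assumption; every other number below is made up for illustration:

```python
# Hedged sketch of the paper's IOPS/$ metric. The 4x PCM price premium is the
# paper's assumption; the IOPS and dollar figures below are made up.

def iops_per_dollar(iops: float, cost_dollars: float) -> float:
    return iops / cost_dollars

# Hypothetical all-flash configuration.
flash_only = iops_per_dollar(iops=500_000, cost_dollars=50_000)           # 10.00

# Hypothetical hybrid: add a small PCM tier at 4x flash's per-GB price and
# assume it absorbs enough hot writes to lift total IOPS by ~35%.
pcm_hybrid = iops_per_dollar(iops=675_000, cost_dollars=50_000 + 10_000)  # 11.25

gain = (pcm_hybrid / flash_only - 1) * 100
print(f"Flash only: {flash_only:.2f} IOPS/$")
print(f"PCM hybrid: {pcm_hybrid:.2f} IOPS/$ (+{gain:.0f}%)")
```

The hybrid only wins if the added tier lifts IOPS faster than it lifts cost, which is why falling flash prices squeeze the case.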

Assuming you had complete system control – write a new file system; implement unique tiering algorithms; optimize dedup or snapshots – where are the leverage points that would justify a costlier NVRAM?

Courteous comments welcome, of course. This post is all about the comments, so wax eloquent!


Frontiers of storage: plasmonics & metamaterials

by Robin Harris on Friday, 7 February, 2014

Two words: plasmonics; metamaterials. These could reshape storage over the next 20 years.

What are they?
Plasmons, specifically surface plasmon waves, are generated when light interacts with a metal. The free electrons in the metal support a wave of charge density fluctuations on the metal’s surface.

If you’ve seen dichroic glass, you’ve seen plasmons in action. It is the optical plasmonic resonances induced by metal nanoparticles that produce the changing colors in the glass.

The Romans made dichroic glass almost 2,000 years ago. Here’s the best example, the Lycurgus cup, first lit with a flash and then lit from behind.
[Images: the Lycurgus Cup appears green when lit from the front and red when lit from behind.]

Metamaterials are artificial structured devices that operate on a scale smaller than the wavelength of an external electromagnetic or photonic source. What this means is that optical lenses can be fabricated that break the normal rules of optics, for example, a negative refractive index.

Researchers have fabricated lenses a few microns in diameter with a focal length of a few microns operating with visible light. This scale is compatible with chip manufacturing technology and could be used in on-chip optical applications.

And this relates to storage how?
Nano-holograms are one potential application of plasmonic metamaterials. In the paper Metasurface holograms for visible light, researchers at Purdue demonstrate

. . . holographic images generated at a wavelength of 676nm by a 30-nm thick planar metasurface hologram, consisting of an array of phase-controlling plasmonic nanoantennas.

To put that in perspective, 30 nm is about 1/23rd of the optical wavelength, whereas conventional holograms would require material almost 700 nm thick. And these holograms encode both phase and amplitude, making it feasible to produce extremely small, low-noise, high-resolution images.

Here is an example:

[Image: example from the paper.]

Or how about a SPASER (surface plasmon amplification by stimulated emission of radiation), an extremely compact source of coherent light? Standard laser sizes are limited by the diffraction limit of light, so a much smaller laser would enable uses not feasible today.

The StorageMojo take
Basically, metamaterials and plasmons enable new ways of writing and reading. While I’ve stressed the optical options, plasmons can also be magnetic, perhaps making them more applicable to today’s hard drives.

Of course, this is all very far from products. But I expect to see some of this technology in mass produced storage in 15 years or less.

Courteous comments welcome, of course. Materials science and nanotechnology are just getting rolling. I want to keep watching this to see where it goes. Any storage researchers looking at this?


Frontiers of storage: magnetic holography

by Robin Harris on Monday, 3 February, 2014

Introducing an irregular series on speculative storage technologies. StorageMojo likes emerging technologies, but these are still gestating and may never emerge. Magnetic holography is first.

Optical holography has gotten lots of funding over the years – most recently with $100m for InPhase – but holography isn’t limited to optical. Holography is the storage of information in the wave interference pattern produced by two beams: one that bounces, physically or virtually, off whatever we seek to record, and a second, coherent reference beam.

Acoustic holograms are used in seismic imaging; microwave holograms in radar. Now comes magnetic holography, in a paper by F. Gertz and A. Khitun of UC Riverside and A. Kozhevnikov and Y. Filimonov of the Kotel’nikov Institute of the Russian Academy of Sciences. Their paper Magnonic Holographic Memory (pdf) describes a solid-state device they’ve fabricated that uses spin wave interference to store data.

A spin wave is

. . . a collective excitation due to oscillation of electron spins in a magnetic lattice. Similar to phonons (lattice vibrational waves) spin wave propagation occurs in magnetic lattices where spins are coupled to each other via the exchange and dipole-dipole interactions and a quantized spin wave is referred to by a quasiparticle called a magnon.

Got that?

It is a subset of spintronics with some attractive properties: longer coherence, i.e. multiple coupled spins are more robust than a single spin, and operation at room temperature. These devices can be fabricated using semiconductor methods, though much remains to be done to achieve a commercial device.

Conceptually the device reminds me of acoustic mercury delay lines – a very early type of digital storage – in that the information propagates through the magnetic matrix and is read on the other side. Longer-term storage requires refreshing the signal, but there looks to be a possible ultra-low power technique.

The authors have fabricated a two-bit device and tested it. They believe it can be scaled down into nanometer devices to achieve storage densities of 1 Tb/cm². Here’s a diagram of the device from the paper:
[Diagram of the device, from the paper.]
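
To put 1 Tb/cm² in perspective, a quick back-of-the-envelope calculation (my arithmetic, not the paper's):

```python
# Back-of-the-envelope: what cell size does 1 Tb/cm^2 imply?
# My arithmetic, not the paper's.

bits_per_cm2 = 1e12                  # 1 Tb per square centimeter
nm2_per_cm2 = (1e7) ** 2             # 1 cm = 1e7 nm, so 1 cm^2 = 1e14 nm^2

area_per_bit_nm2 = nm2_per_cm2 / bits_per_cm2   # 100 nm^2 per bit
cell_pitch_nm = area_per_bit_nm2 ** 0.5         # ~10 nm on a side

print(f"{area_per_bit_nm2:.0f} nm^2 per bit, i.e. roughly a "
      f"{cell_pitch_nm:.0f} nm x {cell_pitch_nm:.0f} nm cell")
```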

The StorageMojo take
Semiconductor magnetic media? Not your father’s disk drive – or this generation’s NAND flash. This is basic research, not product development, but it points out that there’s more than one way to do holography and magnetic storage.

While Seagate and WD are making progress on much higher density disk storage, it is evident that the days of 40% annual increases in areal density are behind us. It’s good to know that there may be other ways to create ultra-dense storage.

Courteous comments welcome, of course. This is the 2nd interesting storage physics paper I’ve seen from UC Riverside. Keep up the good work!
