Hike blogging: Deadmans Pass panorama

by Robin Harris on Saturday, 25 March, 2017

I’ve been hiking a lot the last couple of weeks, getting back in shape after a long hiatus. Today I took a loop that I’d never done counterclockwise, and even though the direction shouldn’t have made much difference, it was a much more enjoyable hike.

The loop began with an easy walk of about a mile on the Long Canyon trail to its junction with the Deadmans Pass trail. From there it is roughly another mile to a favorite of mine, the Mescal trail, with beautiful views of the lower end of Boynton Canyon and, later, of the rocks to the south.

About halfway through, the cloudy sky gave way to partly cloudy, allowing the sun to light up the rocks. Here’s a panorama I took from the Deadmans Pass trail, looking from the west to the north. Enjoy!

Click to enlarge.

The panorama is extra large – 4,000 pixels wide – so there’s a lot to look at. If you look carefully you can see a couple of buildings of the Enchantment resort, which is nestled into the mouth of Boynton Canyon.

The StorageMojo take
With spring break underway, lots of people were enjoying the trail, both hikers and bikers. I’m glad they did, because more rain is expected this afternoon.


DSSD’s demise

by Robin Harris on Wednesday, 22 March, 2017

A couple of weeks ago Dell EMC announced the demise of the once promising DSSD all flash array. They are planning to incorporate DSSD technology into their other products.

As StorageMojo noted 4 years ago, DSSD developed a lot of great technology. But for whatever reason – perhaps turmoil associated with Dell’s purchase of EMC? – EMC’s less-legendary-than-they-used-to-be salesforce couldn’t move the boxes.

The StorageMojo take
The AFA market is full of aggressive competitors, and any new AFA is going to face tough sledding. The entire AFA market is also facing a dry spell, as flash prices have firmed up by 25% in the last six months and look to remain high well into next year. Hybrid arrays will benefit: disk capacity has resumed its upward march, thanks mostly to shingled magnetic recording, and drive prices are still dropping.

I suspect DSSD’s Silicon Valley ethos had problems integrating into EMC, a company known for full-contact politics. And the purchase agreement undoubtedly had multiple milestones, which over time became a straitjacket instead of a carrot.

But the plan – even if it’s just a face-saving gambit – to incorporate DSSD technology into other products has StorageMojo’s full support. As StorageMojo stated 3 years ago:

. . . EMC could use DSSD as a VMAX backend, probably with thrilling performance. So why not mention that? You get all the wonderful software features of VMAX – and a big performance boost!

That was the obvious play then. Today, even a massive shot of hardware Viagra – such as DSSD – can’t save VMAX. If only they’d listened.

Courteous comments welcome, of course.


Avere closes new round – with a twist

by Robin Harris on Tuesday, 21 March, 2017

Avere announced this afternoon that they’ve closed a Series E round of $14 million, bringing their total funding to a cool $97 million. Existing investors Menlo Ventures, Norwest Venture Partners, Lightspeed Venture Partners, Tenaya Capital and Western Digital Capital all re-upped, always a good sign.

But the twist? Google Inc. joined the round.

The StorageMojo take
This makes perfect sense for both Google and Avere. Avere’s products front-end legacy filers, extending their useful lives with load balancing, reduced management, and what is essentially a very smart cache. Front-ending the cloud is a logical extension of that business.

That’s in contrast to NetApp and Dell/EMC, whose end-game for their cloud support seems to be death by a thousand cuts. NetApp would be wise to buy Avere in order to take control of the process that customers are already undergoing: getting rid of filers in favor of the cloud.

Given that $14 million isn’t a huge E round for a hardware company, I’m expecting an IPO, perhaps even this year. The Avere team has done a commendable job, and a rich buyout or an IPO is well-deserved.

Courteous comments welcome, of course.


HP offers to buy Nimble Storage

by Robin Harris on Tuesday, 7 March, 2017

HPE has offered to buy Nimble Storage for $1.09B.

The StorageMojo take
This is a good move for both companies. HPE has the enterprise footprint that Nimble was spending big to build, and Nimble has an advanced, forward-looking storage platform that will bring new ideas into HPE’s enterprise storage group, as well as a cost-effective product line that doesn’t overlap much with HPE’s existing products.

I don’t expect to see the kind of cultural integration issues that Dell is facing with its EMC acquisition either. HPE may be old-line, but it is old-line Silicon Valley – not brash south Boston – and Nimble’s SV culture is a much better fit.

Meg Whitman is stealing a page from Joe Tucci’s playbook: becoming a technology publisher rather than building everything from scratch internally. That’s what EMC did with Isilon, VMware, DSSD, Data Domain and others. While the strategy has its costs – disparate product lines mean added engineering and support expense – it leverages a costly enterprise footprint.

NetApp’s failure to launch an internally developed flash array is a great example of why building from scratch internally is problematic. Rather than a clean sheet design, internal folks can pile on requirements that may make sense as a line extension, but no sense at all from a customer perspective.

I worked with Nimble in their early days – emerging technologies, products and companies are my focus – and they combined a strong architecture with an experienced team, which led to – mostly – excellent execution. StorageMojo congratulates Nimble on a great run.

Courteous comments welcome, of course.


fsck interruptus and your data

by Robin Harris on Thursday, 2 March, 2017

Today is the last day of FAST 17. Yesterday a couple of hours were devoted to Work-in-Progress (WIP) reports.

WIP reports are kept to 4 minutes and a few slides. One in particular caught my eye.

In On Fault Resilience of File System Checkers, Om Rameshwar Gatla and Mai Zheng posed an interesting question: how fault resilient are *nix fsck file system checkers?

This really happened
Texas Tech’s HPC center had a power failure. Once power was restored, file system checking commenced. But then there was another power failure, which led these grad students to look at whether an interrupted checking process could further damage data integrity.

Why isn’t the answer ever no?
Bad news: yes, fsck interruptus can further corrupt data. Good news: it doesn’t always.

More bad news: fsck probably can’t fix the damage it produced on the second go.
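To see how an interrupted repair can make things worse, here’s a toy sketch – my own illustration with hypothetical structures, not from the paper: if a repair takes two ordered metadata writes, a crash between them can leave the file system looking clean while it is still corrupt.

```python
# Toy model of fsck interruptus. All names and structures here are
# hypothetical, for illustration only.

def fsck(disk, crash_after=None):
    """Recompute inode 1's link count from directory entries.
    The repair is two writes: clear the dirty flag, then write the count.
    crash_after=1 simulates power loss after the first write."""
    true_links = sum(1 for e in disk["dirents"] if e == 1)
    writes = [("dirty", False), ("nlink", true_links)]
    for i, (field, value) in enumerate(writes, start=1):
        disk[field] = value
        if crash_after == i:
            return disk  # power failure mid-repair
    return disk

disk = {"dirents": [1, 1, 1], "nlink": 7, "dirty": True}

# First fsck run is interrupted after clearing the dirty flag but before
# fixing the link count: the disk now *looks* clean but is still corrupt.
fsck(disk, crash_after=1)
print(disk["dirty"], disk["nlink"])   # False 7 -> silent corruption

# A second fsck that trusts the dirty flag skips the check entirely.
if disk["dirty"]:
    fsck(disk)
print(disk["nlink"])                  # still 7, never repaired
```

Real checkers are vastly more complex, but the ordering hazard is the same: the repair itself is a sequence of writes that was never designed to be crash-consistent.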

The StorageMojo take
On my more pessimistic days I sometimes wonder whether we have any uncorrupted data stored anywhere. But yes, our storage infrastructure usually works, so that’s something.

This is just one more gotcha to be aware of. I hope Om and Mai can extend this research to pin down the sources of this secondary corruption and figure out how to make fsck more robust.

Courteous comments welcome, of course.


StorageMojo’s Best Paper of FAST 2017

by Robin Harris on Wednesday, 1 March, 2017

StorageMojo’s crack analyst team is attending the USENIX File and Storage Technologies (FAST) ’17 conference. As usual, there is lots of great content.

But only one paper – this year – gets the StorageMojo Best Paper nod. The conference itself awarded two Best Paper honors – Algorithms and Data Structures for Efficient Free Space Reclamation in WAFL and Application Crash Consistency and Performance with CCFS – so you have plenty of reading to catch up on. These are fine papers, but StorageMojo likes another one even better.

StorageMojo’s pick: Redundancy Does Not Imply Fault Tolerance: Analysis of Distributed Storage Reactions to Single Errors and Corruptions, by Aishwarya Ganesan, Ramnatthan Alagappan, Andrea C. Arpaci-Dusseau, and Remzi H. Arpaci-Dusseau, all of the University of Wisconsin, Madison.

From the abstract:

We analyze how modern distributed storage systems behave in the presence of file-system faults such as data corruption and read and write errors. We characterize eight popular distributed storage systems and uncover numerous bugs related to file-system fault tolerance. We find that modern distributed systems do not consistently use redundancy to recover from file-system faults: a single file-system fault can cause catastrophic outcomes such as data loss, corruption, and unavailability.

The researchers built a fault injection system to test these systems. Earlier studies looked at local file systems such as ZFS and ext4, so this paper focused on popular distributed systems: Redis, ZooKeeper, Cassandra, Kafka, RethinkDB, MongoDB, LogCabin, and CockroachDB.
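For a flavor of what file-system fault injection looks like, here’s a minimal sketch – my own illustration, not the authors’ tool: flip a single bit in one block of a data file, then check whether an application-level checksum notices.

```python
import os
import tempfile
import zlib

def corrupt_block(path, block, block_size=4096, bit=0):
    """Flip one bit of the first byte of the given block of a file,
    simulating a single-bit corruption at that offset."""
    with open(path, "r+b") as f:
        f.seek(block * block_size)
        data = bytearray(f.read(1))
        data[0] ^= 1 << bit          # single-bit flip
        f.seek(block * block_size)
        f.write(data)

# Demo on a throwaway file larger than one 4 KiB block.
fd, path = tempfile.mkstemp()
os.close(fd)
payload = b"replica data" * 400
with open(path, "wb") as f:
    f.write(payload)
before = zlib.crc32(payload)

corrupt_block(path, block=0)
with open(path, "rb") as f:
    after = zlib.crc32(f.read())

print(before != after)   # True: the checksum detects the flipped bit
os.remove(path)
```

The paper’s finding is that even when a system computes such checksums, detection doesn’t reliably trigger recovery from intact replicas – and that gap is where the catastrophic outcomes come from.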

The paper’s chief conclusion:

. . . a single file-system fault can induce catastrophic outcomes in most modern distributed storage systems. Despite the presence of checksums, redundancy, and other resiliency methods prevalent in distributed storage, a single untimely file-system fault can lead to data loss, corruption, unavailability, and, in some cases, the spread of corruption to other intact replicas.

Yikes!
That doesn’t sound good, and it isn’t. For example:

. . . a single fault can have disastrous cluster-wide effects. Although distributed storage systems replicate data and functionality across many nodes, a single file-system fault on a single node can result in harmful cluster-wide effects; surprisingly, many distributed storage systems do not consistently use redundancy as a source of recovery.

The StorageMojo take
These conclusions should concern any user of scale-out storage. And if you are a developer of scale-out storage, you should certainly read this paper and the CCFS paper.

But there’s a larger, systemic issue that file system developers need to address. For some reason – inertia, probably – the file system community has been slow to embrace formal verification methods. That’s just silly.

Our digital civilization depends on our file systems. It’s past time to bring them into the 21st century.

Courteous comments welcome, of course.


It’s simple arithmetic: why Trump’s immigration stance is 100% wrong for America

February 25, 2017

An editorial comment This isn’t complicated. America has 325 million people out of the world’s 7.4 billion, or about 4.4% of the world’s population. Despite the numerical imbalance, America also has the world’s largest economy by most measures. There are two countries, China and India, whose populations are several times that of the United States. China’s […]


The coming all flash array/NVMe/PCIe SSD dogfight

February 23, 2017

In this morning’s post on ZDNet on the diseconomies of flash sharing I discuss the fact that many NVMe/PCIe SSDs are as fast as most all flash arrays (AFA). What does that mean for the all flash array market? Short answer: not good Today a Dell PowerEdge Express Flash NVMe Performance PCIe SSD – ok, […]


Why doesn’t storage innovation come from the storage industry?

February 20, 2017

For all the time and effort poured into the storage market over the last 20 years, surprisingly little innovation has come from storage vendors themselves. Why is that? Hall of shame EMC got its opening when IBM whiffed on the storage array business. IBM had no excuse, as the hard work had been done at […]


Cloud integration now mandatory for storage

February 17, 2017

Spoke to the fine folks at Cloudtenna. Their thing: Cloudtenna is the first platform to generate augmented intelligence on top of your existing file repositories. The Direct Content Intelligence (DirectCI) agent uses deep machine learning to identify the files most relevant to each individual user, ushering in a new era of secure intelligent search, file […]


Non-Volatile Memory Workshop ’17

February 15, 2017

StorageMojo’s crack analyst team will be attending the 8th Non-volatile Memory Workshop. Last year’s event attracted 230 participants. This isn’t the Flash Memory Summit, which focuses on flash memory as a storage technology and its commercial application. NVMW is an academic conference, so you won’t see many polished corporate presentations. Their mission statement: The workshop […]


FAST ’17 starts in two weeks

February 13, 2017

FAST 17 starts in two weeks. AFAIK, StorageMojo was the first press to attend and report on FAST, starting in 2008. For years I had it to myself, but in the last few years other publications have started attending. That’s a very good thing, as storage is THE problem of a digital civilization. The more […]


Why Amazon won’t be the IBM of cloud

February 1, 2017

IBM was the driving force in the computer industry beginning with the advent of the IBM 360 mainframe family. Their big idea was to build a family of computer systems that all ran the same software and, generally, used the same peripherals. The IBM 360 was a brilliant idea and a massive success, making IBM […]


Hike blogging: Tavasci Marsh

January 30, 2017

The Verde Valley is green because it has water. The Verde River is Arizona’s 2nd longest river, at 170 miles (the Little Colorado is longer) and in my neck of the woods flows through the towns of Clarkdale, Cottonwood, and Camp Verde. Yesterday I took my first hike in months, so I chose an easy […]


Scality reimagines storage as art

January 11, 2017

The fine folks at Scality send out a new year book of photos and – of course – promos. This year caught my attention because, as a fan of modern art, especially those with Cadillacs, they gen’d up a photo of a disk drive displayed like one of the famous Cadillac Ranch cars. Here’s an […]
