On retiring an old photo

by Robin Harris on Wednesday, 30 May, 2018

Early on, a friend told me I should have a photo of myself on StorageMojo. It seemed like a good idea, so I chose one taken the year before (2005), by a tourist, at sunset on Big Sur.

Sunset gives great light – sunrise too – and I liked the photo’s rosy glow since I don’t have much color otherwise. Here’s the old pic:

But I recently had lunch with a storage luminary I had not seen for a decade, and as we sat down he did a double take as he looked at me.

I guessed it was time to get plastic surgery – or update my photo on StorageMojo.

No tourists were handy, so I took an iPhone X Portrait Mode selfie. Didn’t do much editing, so you could recognize me today from it.

The StorageMojo take
I’ve been doing a lot of writing on ZDNet, as well as a large commercial project and a novel. Something had to give, and StorageMojo was it. Updating the photo is one way to say I haven’t forgotten my favorite blog. I’m also planning to update the WordPress theme later this year.

In the meantime, I’m off to New Haven CT next week for a writer’s workshop on historical fiction. Even though the novel is up to 80,000 words, I’m still finding my voice.

Here’s to all our voices. May we find them and use them.

Courteous comments welcome, of course.


Hike blogging: Hangover Trail 3-31-18

by Robin Harris on Monday, 2 April, 2018

Haven’t been doing much hike blogging lately. Why? Haven’t been doing much hiking.

I’ve mostly been biking. Love my new ebike! Rarely take the car out of the garage.

But I’m not comfortable leaving the bike at trailheads. Yeah, the terrible friction of taking the car out of the garage! 1st world problems, eh?

But I did hike the Munds Wagon, Cow Pies, and Hangover Trails, a lollipop loop, on Saturday. I could tell you it was beautiful, but you can see for yourself.

Click to enlarge.

Road Trip!
Saturday I’m starting to drive to South Carolina. I’ll be visiting the Augusta, Aiken, Columbia and Myrtle Beach areas, and maybe Asheville, NC. The novel I’m writing is largely set on a plantation near Augusta that is now a state park, so I plan to soak up as much of the local color as I can.

I may also – fingers crossed! – be able to meet with some of the descendants of the enslaved on the plantation. I’m hoping to introduce myself to them, read them a chapter or two from the novel, and, if they are so inclined, hear some of their family stories.

Wish me luck!


The Intel/Micron shotgun divorce

by Robin Harris on Wednesday, 14 March, 2018

One of the oddest marriages in high tech is coming to an end. Intel and Micron have announced that they are going their separate ways on 3D Xpoint.

They’ll still share their Utah fab – joint custody – but Intel will no longer subsidize Micron’s R&D, and Micron will stop selling some products at cost to Intel. My guess: the financial changes are a wash.

The StorageMojo take
The divorce isn’t surprising. The two companies – though chip vendors – have very different business models.

Companies hook up for all sorts of reasons, some sound, some less so. But the Intel/Micron relationship got off to a weirder than usual start, with a hurried press event.

I’ve never seen a good explanation for the rushed announcement. My best guess is that Intel intended to help Micron ward off a takeover attempt. Whatever the motivation, that sort of thing doesn’t happen without top execs signing off, so whatever panicked them then is no longer a concern.

One reason for the diverging paths may be technical. Intel wants to sell 3D Xpoint storage – which is block based – and Micron wants to sell 3D Xpoint memory, which is byte addressable. That difference affects yields as well as marketing, since Intel wants to tie 3D Xpoint to its CPUs, and Micron wants as broad a memory market as possible.
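The block vs. byte-addressable distinction is easy to see in code. Here's a minimal Python sketch, using an ordinary file as a stand-in for the media (nothing 3D XPoint-specific; the file name is made up):

```python
import mmap
import os
import tempfile

# Create a stand-in "device": a 1 MiB file of zeros.
path = os.path.join(tempfile.mkdtemp(), "xpoint.img")
with open(path, "wb") as f:
    f.write(b"\x00" * (1 << 20))

BLOCK = 4096  # typical storage block size

# Block-based access (storage model): read and write whole blocks,
# even to change a single byte.
with open(path, "r+b") as f:
    f.seek(2 * BLOCK)
    block = bytearray(f.read(BLOCK))   # read the whole 4 KiB block
    block[17] = 0xFF                   # modify one byte
    f.seek(2 * BLOCK)
    f.write(block)                     # write the whole block back

# Byte-addressable access (memory model): touch exactly the bytes
# you need, as if the media were RAM.
with open(path, "r+b") as f:
    mem = mmap.mmap(f.fileno(), 0)
    mem[3 * BLOCK + 42] = 0x7F         # single-byte store
    value = mem[3 * BLOCK + 42]        # single-byte load
    mem.close()                        # flushes changes to the file
```

The storage model pays a full block of transfer for any update; the memory model pays only for the bytes touched, which is exactly why the two products pull in different directions.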

All in all, the divorce is a Good Thing. It’s always better in the long run – and often in the short run – when companies deal with each other at arm’s length, pay reasonable prices, and control their own roadmaps.

While 3D Xpoint isn’t going to be the winner that Intel and Micron hoped, the hasty announcement helped break trail for other NVMs with better specs and, eventually, economics.

Courteous comments welcome, of course.


StorageMojo’s novel distraction

by Robin Harris on Friday, 23 February, 2018

Time has a way of speeding by, a fact I was reminded of when I looked at my last post and saw it had been 3 months since I’d added anything to StorageMojo.

What gives?

Well, StorageMojo gave. I’ve been busy with ZDNet, but that’s nothing new, as I’ve been posting there for a decade (talk about time flying!).

I also consult, and have been especially busy since October, including work on a potential startup. My routine has also been disrupted since July by my recent move – all of 100 yards, but still – and getting settled in.

The big time sink
But the main distraction is that I’ve been working on a novel. Set in the dystopian past of the antebellum South, it concerns the life of an enslaved woman on the plantation of one of South Carolina’s wealthiest and most powerful men. It’s historical fiction, meaning the main characters were real people, and the plot follows actual events.

Of course, enslaved women left few records of their lives, since teaching slaves to read and write was illegal. Fortunately, the last 20+ years have seen some excellent scholarship that draws on a wide range of sources, from the WPA’s collected narratives of former slaves, to the memoirs of Southern ladies, court records, newspaper accounts, and more.

I’m using the works of historians – and my active imagination – to try to create a more realistic view of plantation life than is commonly portrayed in popular media. So yes, there are some grim moments, because slavery. The research has blown up more than one of my misconceptions about the “peculiar institution” and for that alone I’m thankful.

Pilgrim’s progress
The novel is up over 75,000 words, with probably 25,000 words to go. I’m sharing chapters with several writers groups here, and the work has been well received.

So my plan is to release the early chapters as audio files – read by the author! – here by mid-2018, to get feedback from StorageMojo’s discerning audience. I’ll warn readers that this is NOT what you’ve come to expect from StorageMojo, but I hope it will find favor among those interested in a story based on real events, or stuck in traffic on 101.

The StorageMojo take
However the novel proceeds, I’ve been having a blast writing this for the last two+ years. And in about 6 weeks I’m heading to South Carolina to visit the plantation where the people I write about lived. It’s amazing what a help the internet has been, but there’s no substitute for placing events in their physical setting.

So I apologize to readers that I’ve stepped away from StorageMojo for this while, but I hope at least some of you will agree that this project, when you hear it, was worth the detour. After that, it’s back to StorageMojo’s regularly unscheduled programming!

Courteous comments welcome, of course.


Panasas pushes on

by Robin Harris on Thursday, 16 November, 2017

Panasas has long been one of the most innovative storage companies – and the industry’s best kept secret. The latter fact is due to their focus on High Performance Computing (HPC), and a steadfast refusal to market themselves as “enterprise” storage.

So, yeah, it is an engineering company. But they keep turning the product development crank and growing their product capabilities.

Their latest announcement is a case in point. They have a new controller platform that is server, rather than blade, based. This enables them to put considerably more scale-up grunt behind their controller software.

Panasas has disaggregated their director software to run on commodity servers. That software can also be run on a cluster of at least 4 nodes for high availability and performance – with up to 360GB/sec of bandwidth.

The director software is not in the data path, but as Isilon users can attest, poor metadata handling can cripple nominally powerful storage controllers. Panasas maintains separate control and data planes to mitigate metadata performance issues.

Parallel file system
Panasas was also an early advocate of pNFS, the NFS 4.1 parallel file access protocol. The pNFS standard was stillborn, though, because vendors whose products couldn’t take advantage of the extra performance declined to support it.

But the pNFS architecture lives on in Panasas Direct Flow, which the company has continued to develop. Direct Flow enables multiple 10 Gigabit or faster Ethernet links to act in parallel to speed large file transfers.
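Direct Flow itself is proprietary, but the parallel-striping idea is simple. A minimal Python sketch (the function names are mine, and a local file read stands in for a per-link network fetch):

```python
from concurrent.futures import ThreadPoolExecutor

def read_range(path, offset, length):
    """Stand-in for fetching one byte range over one link."""
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(length)

def parallel_read(path, size, links=4):
    """Split a large transfer into per-link stripes, fetch the
    stripes concurrently, and reassemble them in order."""
    stripe = -(-size // links)  # ceiling division
    ranges = [(i * stripe, min(stripe, size - i * stripe))
              for i in range(links) if i * stripe < size]
    with ThreadPoolExecutor(max_workers=links) as pool:
        parts = pool.map(lambda r: read_range(path, *r), ranges)
    return b"".join(parts)
```

With independent links and a parallel file system behind them, each stripe moves concurrently, so the transfer time approaches the single-link time divided by the link count.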

The StorageMojo take
There’s a lot more to the announcement, but the bottom line is that Panasas continues to innovate and push the envelope of high performance storage for HPC.

AI’s need for large training data sets is a natural for HPC storage. We’re at the beginning of a very interesting curve in high performance storage.

Courteous comments welcome, of course.


The limits of disaggregation

by Robin Harris on Wednesday, 30 August, 2017

Hyperconvergence – aka aggregation – is pushing scale-out architectures in one direction. But Rack Scale Design (RSD) – aka disaggregation – is pushing scale-out in another direction. And Composable Infrastructure is hoping to split the difference, with the power to define aggregations in software, rather than hardware.

But this continuum is not symmetrical on each end. We have a pretty good idea of what can be done with hyperconvergence – check out the growing vendor roster – but disaggregation is still mostly in the theory stage.

That’s why the recent paper, Understanding Rack-Scale Disaggregated Storage, by Sergey Legtchenko, Hugh Williams, Kaveh Razavi, Austin Donnelly, Richard Black, Andrew Douglas, Nathanaël Cheriere, Daniel Fryer, Kai Mast, Angela Demke Brown, Ana Klimovic, Andy Slowey, and Antony Rowstron, of Microsoft Research, is so useful.

For the research, the authors developed an experimental research fabric, dubbed the Flexible Fabric, to test four levels of disaggregation based on how often reconfiguration is needed.

The levels are:

  • Complete disaggregation. Assumes any drive can be connected to any server on a per I/O basis. Most frequent reconfig.
  • Dynamic elastic disaggregation. Assumes drives will connect to servers for multiple I/Os, but that the number of drives connected to any one server will vary over time.
  • Failure disaggregation. Reconfigure only on drive or server failures.
  • Configuration disaggregation. Reconfigure only during deployment, or if a rack is repurposed. Least frequent reconfiguration.

Flexible fabric
The team needed a fabric that could reconfigure in a millisecond to even get close to testing the complete disaggregation model. With SSDs capable of hundreds of thousands of IOPS, even a millisecond is much too long, but who can do better?
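The arithmetic makes the gap concrete. With illustrative numbers (not from the paper), a drive doing 300K IOPS completes an I/O every few microseconds, so even a 1 ms reconfiguration costs hundreds of I/Os’ worth of time:

```python
iops = 300_000                 # illustrative SSD throughput
io_time_us = 1_000_000 / iops  # time per I/O, in microseconds
reconfig_us = 1_000            # 1 ms fabric reconfiguration

ios_lost = reconfig_us / io_time_us
print(f"Each I/O takes about {io_time_us:.1f} µs")      # ~3.3 µs
print(f"A 1 ms reconfig costs ~{ios_lost:.0f} I/Os")    # ~300 I/Os
```

Switching the fabric on every I/O would spend roughly 300x more time reconfiguring than doing the I/O itself, which is why complete disaggregation was never really in the running.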

The paper describes the Flexible Fabric:

The core of the Flexible Fabric is a 160-port switch, which implements a circuit switch abstraction. The switch allows any port to be connected to any other port. When any two ports are connected, we refer to them as being mapped . . . . The switch supports both SAS and SATA PHYs and is transparent to all components connected to it.

The authors take pains to point out that the Flexible Fabric is a research tool, not intended for any kind of production use.

It’s a research tool, not a stalking horse for a new kind of fabric product.

In their research the team found some anomalies. They couldn’t use a modern PHY like SAS 3.0 because it does link-quality scanning – a good thing – which makes setup take as much as a second – a bad thing.

They also discovered that rapid and frequent drive switching crashed some host bus adapters. For the SATA configuration, they finally selected the Highpoint Rocket 640 Lite 4-port SATA 2.0 PCIe 2.0 controller.

Summary results

  • Complete disaggregation was killed by the overhead of rapid switching. Not a huge surprise.
  • Dynamic elastic disaggregation, where drives are connected to servers for minutes to hours at a time, proved to be technically viable, and potentially a boon for variable workloads.
  • Failure disaggregation also proved to be technically viable, and its use case – migrating drives from a failed server to minimize the network overhead of rebuilds – is definitely interesting.
  • Configuration disaggregation, where configurations are set at deployment, turned out to be a bust, because the flexibility and cost of the fabric didn’t provide a commensurate benefit.

The StorageMojo take
So the extremes aren’t interesting, at least given the issues with current technology. But that leaves a wide swath of possibilities for system architects to explore as RSD/disaggregation/composable infrastructure ideas gain steam.

Of course, now and always, reliability trumps flexibility. And there are, no doubt, many gremlins in dynamic disaggregation scenarios.

But greater disaggregation seems to be a secular trend due to the dissimilar rates of technological change in the underlying CPU, network, and storage technologies. Work like this paper helps sort out the issues.

Courteous comments welcome, of course.


Eclipse 2017

August 22, 2017

Things have been a bit quiet here at Chez Mojo – at least on the publishing side. On the personal side I’ve been busy with a few things, one being a move to a new place. Not much of a move – about 100 yards as the crow flies – but packing isn’t much different […]


How high redundancy can hurt availability

July 24, 2017

I wrote about how clouds fail on ZDNet today, but there was another wrinkle in the paper that I found interesting: high redundancy hurts. Counter intuitive? This comes from the paper Gray Failure: The Achilles’ Heel of Cloud-Scale Systems, by Peng Huang, Chuanxiong Guo, Lidong Zhou, and Jacob R. Lorch, of Microsoft Research, and Yingnong […]


Hike blogging: 07-17-2017

July 17, 2017

Hike blogging has been on hiatus for several reasons, including no good pictures, packing up for a short move, too much rain – it’s monsoon time now – and I’ve been getting back to biking as well. But this morning got out at 630 on to the Twin Buttes/Hog Heaven/Hog Wash loop. It’s about 4.5 […]


Flash Memory Summit next month

July 17, 2017

StorageMojo’s crack analyst team will be attending next month’s Flash Memory Summit. The dates are August 8-10, at the Santa Clara Convention Center. Wasn’t able to attend last year, but the 2015 summit was the best storage show I’d seen in years. Flash is where the action is, with NVRAM coming along as well. I’ve […]


The moving target problem

July 11, 2017

With the news that Toshiba has developed 3D quad-level cell flash with 768Gb die capacity, I’m reminded of the moving target problem. This is a problem whenever a new technology seeks to carve out a piece of an existing technology’s market. Typically, a startup seeks funding based on producing a competitive product in, say, two […]


Why startups fail

June 21, 2017

A great piece at CB Insights. They collected the failure stories of 101 startups and then broke those failures into 20 categories. Spoiler alert! Here are the top 10 reasons for failure, as compiled by CB Insights. What I find interesting is that 8 of the top 10 reasons are marketing related. No market need. […]


A transaction processing system for NVRAM

June 19, 2017

Adapting to NVRAM is going to be a lengthy process. This was pointed out by a recent paper. More on that later. Thankfully, Intel wildly pre-announced 3D XPoint. That has spurred OS and application vendors to consider how it might affect their products. As we saw with the adoption of SSDs, it takes time to […]


A distributed fabric for rack scale computing

June 12, 2017

After years of skepticism about rack scale design (RSD), StorageMojo is coming around to the idea that it could work. It’s still a lab project, but researchers are making serious progress on the architectural issues. For example, in a recent paper, XFabric: A Reconfigurable In-Rack Network for Rack-Scale Computers, Microsoft Researchers Sergey Legtchenko, Nicholas Chen, […]


Infinidat sweetens All Flash Array Challenge

June 6, 2017

In response to yesterday’s StorageMojo post on Infinidat, Brian Carmody of Infinidat tweeted: Robin, Verde Valley is a great organization. @INFINIDAT will donate $10K for every Infinidat Challenge customer who mentions your blog post. — Brian Carmody (@initzero) June 5, 2017 Thanks, Brian! The StorageMojo take Verde Valley Sanctuary is a fine organization that StorageMojo […]
