EMC’s DSSD hiring is exploding

by Robin Harris on Wednesday, 18 February, 2015

DSSD, the Valley startup acquired by EMC last year (see EMC goes all in with DSSD), is continuing to hire at an accelerating rate. Informed sources put the current DSSD team at 160 heads, with plans to grow it to 800 over the next year.

This is a program in a hurry. Hiring in such numbers is expensive in today's very tight Silicon Valley job market, where even new graduates command six figures – and need every dollar to afford an apartment with only one roommate.

But given that VMAX needs a low-latency backend to replace its uncompetitive disk infrastructure, the rush is understandable. Rotating media is no longer competitive with flash for enterprise applications, and VMAX is feeling the heat.

DSSD is looking for hardware, datapath, firmware, management software and storage appliance software engineers. The jobs are in Menlo Park, a very nice and expensive place to live.

The StorageMojo take
DSSD has some very smart people behind it, including the ZFS crew and Andy Bechtolsheim. But the hiring plans suggest that integrating DSSD with VMAX is a much bigger job than originally expected.

That shouldn’t be a surprise. Besides the challenge of building extremely high-performance, highly reliable storage – a hard problem even for experienced teams – many of the disk-focused optimizations in the current VMAX code will have to be surgically removed to achieve a significant speedup.

EMC will have to show customers significant performance boosts if they hope to earn higher margins on the refreshed all-flash VMAX. Excising decades of disk-oriented cruft is key and non-trivial.

Competitors touting IOPS today need to analyze EMC’s likely marketing direction with the new VMAX. Of course EMC will continue to push the “data services platform” line, but latency is the intended performance proof point.

Engineering schedules only slip one way. While EMC will tout “early access” – beta – programs late this year, don’t expect to see commercial shipments until mid-2016.

But once EMC gets it out the door, expect a full-on marketing onslaught for a flash-enhanced VMAX value proposition. With cloud providers taking significant enterprise storage capacity, EMC will be hungry for market share anywhere they can get it.

Courteous comments welcome, of course.


Latency in all-flash arrays

by Robin Harris on Tuesday, 17 February, 2015

StorageMojo has been writing about latency and flash arrays for years (see The SSD write cliff in real life), with a focus on data from the TPC-C and SPC-1 benchmarks. The folks at Violin Memory asked me to create a Video White Paper to discuss the problem in a bite-size chunk.

Latency is the long pole
Steven Swanson’s and Adrian M. Caulfield’s work at the University of California San Diego found that for a 4KB disk access, the standard Linux software stack accounted for just 0.3% of the latency and 0.4% of the energy consumption. With flash, however, the same software stack accounted for 70% of the latency and 87.7% of the energy consumed.
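
To make those percentages concrete, here’s a back-of-envelope sketch. The device latencies are my illustrative assumptions, chosen to land near the research ratios – only the stack-share percentages above are the real data points:

```python
# Back-of-envelope: a fixed software-stack cost vanishes behind a disk seek
# but dominates once the media is flash. Device latencies here are assumed
# round numbers, not figures from the UCSD work.
STACK_US = 20.0  # assumed kernel I/O stack cost per 4KB access (microseconds)

for media, device_us in [("disk", 7_000.0), ("flash", 10.0)]:
    share = STACK_US / (STACK_US + device_us)
    print(f"{media:5s}: stack is {share:.1%} of total latency")
# disk : stack is 0.3% of total latency
# flash: stack is 66.7% of total latency
```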

Clearly, the software stack issue belongs to no single company. But array vendors can help by reducing the latency inside their products.

That’s why open and documented benchmarks are important. It is too bad that the erstwhile industry leader, EMC, doesn’t offer either benchmark, unlike other major vendors.

The StorageMojo take
The Violin engineering team has done an admirable job of reducing their array’s latency, as measured in TPC-C benchmarks. Not merely the average latency – which almost any flash array can keep under 1ms – but maximum latency as well, at 5ms or less.

Compare that to an early 2015 filing by a major storage company for their flagship flash storage array. The Executive Summary shows that the array’s average latency was under 1 millisecond.

Impressive and reassuring. However, in the Response Time Frequency Distribution Data we see what the average response times don’t tell: millions of I/Os took over 5ms and hundreds took over 30ms – and perhaps much longer, since the SPC-1 report groups them all in one “over 30ms” bucket.
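
To see how a sub-millisecond average coexists with an ugly tail, here’s a synthetic sketch – the distribution is made up for illustration, not drawn from any SPC-1 filing:

```python
import numpy as np

# A long-tailed (lognormal) latency distribution: the average stays under
# 1 ms while tens of thousands of I/Os still blow past 5 ms. Illustrative
# numbers only -- not SPC-1 data.
rng = np.random.default_rng(0)
latency_ms = rng.lognormal(mean=-0.75, sigma=1.0, size=10_000_000)

print(f"average latency: {latency_ms.mean():.2f} ms")   # ~0.78 ms
print(f"I/Os over 5 ms:  {(latency_ms > 5).sum():,}")   # ~90,000
print(f"I/Os over 30 ms: {(latency_ms > 30).sum():,}")  # ~170
```

Scale that up to the billions of I/Os in a full benchmark run and those tail buckets hold millions.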

The basic insight of Statistical Process Control, that reduced component variability improves system quality, applies to computer systems as well. Reduced maximum latency and sustained IOPS are key metrics for improving system performance and availability.

Courteous comments welcome, of course. Violin paid StorageMojo to produce the video, however the opinions expressed are my own.


EMC’s missing petabytes: the cost of short stroking

by Robin Harris on Tuesday, 10 February, 2015

A couple of weeks ago StorageMojo learned that while a VMAX 20k can support up to 2,400 3TB drives, it can only address ≈2PB. Where did the remaining 5 petabytes go?

Some theories were advanced in the comments, and I spoke to other people about the mystery. No one would speak on the record, but here’s the gist of the received wisdom.

Different strokes for different folks
Short stroking is the short – and best – answer. Short stroking uses only the outermost tracks – the fastest and most capacious – to punch up drive performance.

By cutting seek time and maximizing data transfer rates, a short-stroked drive delivers more IOPS and faster transfers. Wonderful!
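
A first-order service-time model shows why. The seek times below are assumed round numbers, not specs for any particular drive:

```python
# Per-I/O service time ~= seek time + half a rotation (average rotational
# latency); small-block transfer time is negligible.
def iops(seek_ms: float, rpm: int) -> float:
    half_rotation_ms = 60_000 / rpm / 2
    return 1000 / (seek_ms + half_rotation_ms)

print(f"15k RPM, full stroke (~3.5 ms avg seek):  ~{iops(3.5, 15_000):.0f} IOPS")
print(f"15k RPM, short stroke (~0.8 ms avg seek): ~{iops(0.8, 15_000):.0f} IOPS")
# Command queuing and cache hits push real-world numbers higher still --
# toward the ~600 IOPS ceiling discussed in the take below.
```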

But at what cost? The 20k’s numbers serve as a first approximation.

Assume a maxed-out VMAX 20k with 80 SSDs: that leaves 2,320 3TB 3.5″ drives, for a raw capacity of 6,960TB. Assuming 8-drive RAID 6 LUNs, we get a dual-parity protected capacity of 5,220TB. Dividing EMC’s spec of 2,067TB open-system RAID 6 capacity by 5,220TB gives 39.6% capacity efficiency, which would use roughly the outer 0.8″ of a 3.5″ platter. That would certainly improve IOPS and transfer rate.

Research indicates that the 2,320 disks are roughly half of the total BOM cost. Thus, if you pay $1.4m (not including software) for a fully loaded 20k, $700k goes for the raw 7PB. Since you only get 2PB usable, you are paying ≈$350k – depending on your discount, of course – per short-stroked PB of capacity.
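
For the spreadsheet-averse, the whole first approximation fits in a few lines – all inputs are the round numbers and estimates above, not EMC list prices:

```python
# Capacity and cost arithmetic for a maxed-out VMAX 20k, per the post.
disks     = 2400 - 80            # drive slots minus SSDs = 2,320 spindles
raw_tb    = disks * 3            # 3TB each: 6,960 TB raw
raid6_tb  = raw_tb * 6 / 8       # 8-drive (6+2) RAID 6: 5,220 TB
usable_tb = 2067                 # EMC's open-system RAID 6 spec

print(f"capacity efficiency: {usable_tb / raid6_tb:.1%}")      # 39.6%
cost_per_pb = 1.4e6 * 0.5 / (usable_tb / 1000)                 # disks ~half the BOM
print(f"cost per usable PB: ${cost_per_pb:,.0f}")              # ~$339k; call it $350k
```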

The StorageMojo take
We already knew traditional legacy arrays were expensive. What’s really interesting is that even with short stroking, 15k disks would be hard-pressed to do more than 600 IOPS each – or, generously, 1.4m IOPS across 2,320 drives. EMC promises “millions of IOPS” from the 20k, so even with short stroking, much of the system’s total performance likely comes from its caching and SSDs rather than the costly short-stroked disks.

Before you buy your next VMAX or other legacy-architecture disk array, take a hard look at the cost of short-stroked disks. You can do much better, with less complexity, with an all-flash solution at an equal or lower cost. Not to mention the lower OpEx from reduced floor space, power, cooling and maintenance.

Courteous comments welcome, of course. EMC’ers and others are welcome to offer their perspectives on this analysis. Update: Note that the missing petabytes come from using 7200RPM drives, not 15k drives. End update.


Help StorageMojo find the VMAX 20k’s lost petabytes!

by Robin Harris on Wednesday, 21 January, 2015

While working on a client configuration for a VMAX 20k – and this may apply to the 40k as well; I haven’t checked – I encountered something odd: the 20k supports up to 2,400 3TB drives, according to the EMC 20k spec sheet. That should be a raw capacity of 7.2PB.

However, the same spec sheet appears to say that the maximum usable storage capacity – after formatting overhead, system needs, etc. – is ≈2PB. Here’s the table from the spec sheet:

[20k spec sheet table]

Coming at it from other directions: since the 20k supports four 60-drive enclosures per rack, and supports 10 drive racks – with a lot of daisy-chaining – you can indeed connect 2,400 3.5″ SAS drives. So where does the discrepancy come from?

Try as I might, I can’t reconcile it. Simple spec sheet error? I can’t read? Are large-capacity drives short stroked? Is the NSA using the rest? What’s up?

I know there’s a lot of EMC expertise out there, so please, enlighten me!

The StorageMojo take
I’ve been looking at a number of systems this week. As far as large vendor product info goes, EMC and NetApp are the least forthcoming, HP the most, and HDS in the middle.

Normally, withholding information from customers is meant to ensure a call to their friendly Sales Engineer, but perhaps it is meant to instill passivity: “Boy, I can’t figure this out. Why even try?”

Perhaps more on this topic later. I’ve also thought of a twist on the Price Lists that may justify un-deprecating them. Your thoughts?

Courteous comments welcome, of course. Update: If I were in competitor product marketing, especially for flash products, I’d be updating my “flash is competitive with disk” slides tonight. End update.


Facebook on disaggregation vs. hyperconvergence

by Robin Harris on Tuesday, 6 January, 2015

Just when everyone agreed that scale-out infrastructure with commodity nodes of tightly-coupled CPU, memory and storage is the way to go, Facebook’s Jeff Qin, a capacity management engineer – in a talk at Storage Visions 2015 – offers an opposing vision: disaggregated racks. One rack for computes, another for memory and a third – and fourth – for storage.

The rationale: applications need different amounts of each resource over time. Having thousands of similarly configured servers ignores this fact and leads to substantial – at FB scale – waste.

They’ve also found that different components reach functional obsolescence at different rates. Refreshing hardware at the rack level is simpler than opening thousands of servers and replacing dusty bits.

Enabling this dramatic change is their new network. Qin offered no details, but it must deliver high bandwidth and extraordinarily low latency.
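
Rough numbers show why the latency requirement is so brutal. These are commonly cited figures, not Facebook’s:

```python
# Why disaggregation lives or dies on network latency (assumed typical values).
local_dram_ns = 100    # local DRAM access
flash_read_us = 100    # flash read through the I/O stack
net_rtt_us    = 5      # an aggressive in-rack network round trip

# Remote flash: the network adds ~5% -- disaggregating storage is plausible.
print(f"remote flash penalty: +{net_rtt_us / flash_read_us:.0%}")
# Remote DRAM: the network multiplies access time ~50x -- pooled memory needs
# far lower latency than a conventional datacenter LAN delivers.
print(f"remote DRAM penalty: {net_rtt_us * 1000 / local_dram_ns:.0f}x")
```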

Another rack resource coming soon: optical cold storage racks starting at 1PB and expected to go to 3-4PB with the advent of 400GB optical discs.

The StorageMojo take
Holy disaggregation, Batman! The hooded crusaders at Facebook are roaring out of the Zuckcave with architectures blazing. Maybe hyperscale is even odder than we imagined.

What does this mean for the rest of us? A first approximation: very little.

Facebook is an amalgam of services with very different requirements: instant messaging; friend news feeds; gaming; video; long-term photo storage; and oodles of advertising and user tracking.

An Amazon home page draws on over 100 distributed asynchronous services, but the focus is your shopping cart and payments. Facebook is, in comparison, a realtime feed mashed up with a massive personal archive.

Facebook is popular culture, and its application resource requirements reflect that. Apps, like memes or fads, ebb and flow with users’ whims. Search, by contrast, is almost static.

To the extent that there is a larger lesson, it’s the network that FB has designed. If they can actually make disaggregation work, the network is key.

The advantages of stripped-down, warehouse-optimized LANs recall the earlier battle between RISC and CISC in CPUs: simpler, cheaper and faster vs. complex, costly and slower.

That is an idea with legs.

Courteous comments welcome, of course. As is traditional, Internet access at CES is spotty.


Friday hike blogging: Highline trail

by Robin Harris on Friday, 19 December, 2014

Hike blogging got interrupted by some schedule conflicts and bad weather. Federal district court jury duty was one interruption. Some needed rain and cold temperatures were another.

To make up for the lapse, here are two pictures from this week’s hike around Cathedral Rock.

Winter is one of my favorite seasons because of the clouds and, sometimes, fog on the rocks. Here’s a shot from early in the hike, looking east to the Mogollon Rim:

[click to enlarge]


As the 4+ hour hike progressed, the weather improved. This view of Cathedral Rock is looking northwest from the Highline trail in the last hour of the hike.

[click to enlarge]


Planning to take Bo and Tess on a walk later today. And if the weather cooperates, I’m hoping to hike the spectacular Hangover trail on Tuesday.


WD acquires Skyera: whoa!

December 18, 2014

The Skyera acquisition could signal a sea change in the relationship between storage device and system makers. It is overdue. Traditionally, device makers avoided competing with their customers. That is what made Seagate’s acquisition of Xiotech (now X-IO) years ago so surprising. StorageMojo was critical of Seagate’s Xiotech acquisition because there were large and […]


The 30% solution

December 2, 2014

The existential battle facing legacy storage vendors is about business models, not technology or features. Market forces – cloud providers and SMBs – are trying to turn a high-margin business into a much lower-margin business. We have already seen this happen with servers. Many large minicomputer companies enjoyed 60 to 70% gross […]


Friday hike blogging: Cockscomb Butte

November 21, 2014

Thanks to re:Invent and some other issues I haven’t done much hiking in the last 2 weeks. But back on the 8th I took a hike with my friend Gudrun up Cockscomb. There’s no official Forest Service trail so I was glad to have a guide for my first ascent. We spent about 90 minutes […]


Primary Data takes on the enterprise

November 21, 2014

The economics of massive scale-out storage systems has thrown a harsh light on legacy enterprise storage. Expensive, inflexible, under-utilized data silos are not what data-intensive enterprises need or – increasingly – can afford. That much is obvious to any CFO who can read Amazon Web Services’ pricing. But how to get from today’s storage […]


EMC’s re-intermediation strategy

November 18, 2014

EMC – and other legacy array vendors – are trying to become an intermediary between enterprises and the cloud. Are cloud-washed arrays a viable strategy? EMC’s Joe Tucci is working to ensure that EMC can survive in a cloud world even if he doesn’t manage to sell the company before he retires. The acquisitions of […]


AWS re:Invent this week

November 11, 2014

StorageMojo’s legion of analysts are saddling up for the ride to Las Vegas for AWS re:Invent. This is StorageMojo’s first time at the event, and I trust it will be excellent. If you’re there, feel free to say hi! Courteous comments welcome, of course. Please comment on anything you’d like to see The StorageMojo take on.


Pure vs EMC: who’s winning?

November 11, 2014

Forbes contributor and analyst Peter Cohan writes on seemingly conflicting stories coming from Pure and EMC. Let’s unpack the dueling narratives. Does Pure win 70% vs EMC or does EMC win 95% vs Pure? The metrics: Both parties seem to agree that they meet up very often in competitive bidding situations. EMC claims that it […]


Friday hike blogging: clouds and sun

November 7, 2014

I’ve been hiking regularly, but not posting Fridays due to some schedule conflicts. Last Saturday I took one of my favorite loops, but started from Soldiers Pass rather than Mormon Canyon, and headed counterclockwise. The weather was unsettled, with dark clouds to the north and east and broken clouds to the west. As it was […]


Mark Lewis on Formation’s enterprise play

November 6, 2014

Formation Data Systems announced a soft launch a few weeks ago with a $24M round – hefty for a software play – and one of the investors is Kumar Malavalli, the smart guy behind Brocade. StorageMojo spoke to FDS CEO Mark Lewis. The what: Formation is focused on attacking the cost and expense of enterprise […]
