EMC II’s ragged last quarter

by Robin Harris on Monday, 27 April, 2015

As reported in a Seeking Alpha quarterly call transcript, EMC’s storage unit had a $75 million shortfall in Q1.

CEO Joe Tucci said:

. . . we were disappointed that we fell a bit short of our Q1 revenue plan, approximately $75 million short. This $75 million revenue shortfall occurred in our storage business. That said, our storage product backlog did increase by $100 million year-on-year, right on plan.

Why?
Tucci again:

About two-thirds of this miss was due to the fact that we didn’t execute as crisply as we normally do. The other third was due to negative geopolitical effects in Russia and China that slowed down bookings.

He also cited CIOs’ changing priorities: a focus on enterprise-wide “digitization” and cyber security. As a result:

. . . CIOs are searching for ways to reduce cost and increase efficiencies in their current infrastructures and legacy applications, while maintaining and even enhancing overall quality and performance.

Reducing costs for current infrastructure is a big problem for legacy vendors, most of whom are locked into high-gross-margin business models that guarantee customers high costs and frequent forklift refreshes. EMC’s Information Infrastructure business knows that very well, as it laid off 1,500 employees this quarter.

Evidently that layoff hit muscle and bone as well as fat, according to David Goulden, EMC II CEO. About a third of the $75 million storage shortfall was attributed to the layoffs:

In the course of the first quarter inside EMC information infrastructure, we reduced about 1,500 positions. Obviously when you do something of that size, it did impact some of our field-facing parts of the business and that caused a bit of a slowdown in terms of how quickly we got out of the gate in the first quarter on the go-to-market side.

But VMAX customers may not welcome Goulden’s (old) news that EMC is milking VMAX to fund new business:

. . . the remaining factor, which was again one-third of the miss, relates to the cost optimization cost management we’ve been doing inside of the company. As you know, we’re aggressively driving cost in our traditional business to fund investments in our newer businesses.

Good news – for EMC – on XtremIO
Goulden expects that XtremIO will reach $1 billion in revenue this year, at excellent gross margins.

Actually the XtremIO gross margin profile is actually quite attractive for a couple of reasons. First of all, it’s sold as an appliance, hardware and software combined. Secondarily, because it’s an appliance model, we start charging maintenance on basically day one for that product. So we don’t have to accrue for a multiyear warranty. So over the life cycle of an XtremIO system, the gross margin profile is actually quite good and very comparable to our traditional VMAX, VNX blended margins.

Goulden on DSSD

First of all, DSSD coming to market later this year, we couldn’t be more excited about that. I did mention though I expect to see some exciting news around XtremIO at EMC World as well. So the flash agenda is alive, well and strong, and they’re positioned quite differently. Basically XtremIO is a SAN attached Block Storage Device designed to make existing applications run much, much faster. DSSD is aimed more at the NextGen in-memory third platform applications. It handles protocols like Hadoop as well as native protocols, is designed to be basically an extension of memory, and the applications that we’re using the systems for are actually quite different. So they’re both based upon flash technology, but the applications they are aimed for are different and quite complementary. So couldn’t be more excited about the flash agenda. You’ll hear a lot more about XtremIO in two weeks’ time, a little bit more about DSSD as well.

The StorageMojo take
The analysts weren’t happy about EMC’s failure to meet or beat the Street EPS expectation of 36¢, but EMC toughed it out, saying that their 31¢ met “internal expectations.”

I wonder about that $75m shortfall, some 2/3rds of which – roughly $50m – evidently migrated into the “expected” $100m backlog. If they were already expecting a $100m increase, why didn’t the orders that shifted into this quarter push it to $150m?

On the DSSD front, EMC seems confident that DSSD will come out this year, and since it is almost May, they should have a good idea by now. But, of course, EMC’s definition of “coming to market” is fluid: it could mean a beta; or an announcement; or, least likely, shipping v1.0 product for revenue. (For more StorageMojo analysis of DSSD see What is DSSD building?, EMC goes all in with DSSD and EMC’s DSSD FUD.)

As the largest and most diversified storage vendor, EMC is admirably positioned to survive the current tsunami of new technologies and companies, unlike some of their less well-positioned competitors (see How doomed is NetApp?). But they are counting on being able to “brand” commodity hardware with a combination of proprietary and open-source software while maintaining 60-70% gross margins.

This may work for a while, but ultimately scale-out object storage will win file-based workflows. The first vendor to offer a scale-out software solution that doesn’t require costly hand-holding to install and manage will hit the Pivotal business model hard.

This game isn’t over by a long shot.

Courteous comments welcome, of course. I’m probably one of the only – if not the only – analyst who hasn’t taken a freebie from EMC in years. Boo-hoo!

{ 2 comments }

How doomed is NetApp?

by Robin Harris on Monday, 13 April, 2015

The current turmoil caused by plummeting cloud storage costs, new entrants sporting modern architectures and the forced re-architecting due to flash and upcoming NV memories is a perfect storm for legacy vendors. Some are handling it better than others, but some, like IBM and NetApp, appear to be sinking.

NetApp is signalling that their 2015 sales may not reach expected levels – not a surprise – and that more layoffs – on top of the ≈1500 in the last couple of years – could come soon.

The latest troubling sign from NetApp is the failure of their FlashRay project to ship a competitive product. The VP in charge, Brian Pawlowski, left NetApp for Pure Storage last month and the company folded the development effort into the ONTAP team.

According to press reports, the FlashRay project started almost 3 years ago, but has yet to ship a competitive product, despite the efforts of 100+ engineers. Given that there’s nothing FlashRay was supposed to do that was terribly novel, the problems are more likely political than technical, a conclusion reinforced by the decision to place FlashRay under the ONTAP team.

NetApp has well-known problems integrating acquisitions into their products. Now it seems they have problems developing new products as well. Not promising, given the threatening secular trends pinching their core business.

The secular trends include:

File server obsolescence. NetApp’s original raison d’être is no longer state of the art – or all that interesting. File servers were a great idea 25 years ago – like RAID – but newer technology – object storage – is looking to replace them.

Cloud encroachment. When arrays were the only game in town, customers had to buy another when they needed capacity. But now old and rarely used files are moving to the cloud – and the repeat business with them.

Margin pressure. The combination of object and cloud storage – both much less costly than traditional arrays – is waking up customers and producing buying resistance.

A recent note from a disgruntled NetApp customer highlights this issue [edited for length]:

I’m currently engaged in replacing a FAS3240 going off maintenance with a newer version because Netapp offered a deal I couldn’t refuse. . . . [W]hile the drives and shelves are at least not inflated to ridiculous multiples (Why the heck am I being charged even a penny over retail?) they’ve now instituted a $34/TB surcharge to “license” me to use the very storage hardware I just bought from them.

. . . I am incensed at their audacity to charge me a FAT 60-70% margin on commodity hardware. . . .

The reason that customer feels he is being gouged is that the direct sales model is expensive. You pay for your salesman whether you like him or not – and he’s paid a lot more than your local Best Buy clerk.

The StorageMojo take
Bulk file storage hardware is commoditized. The upstart vendors are virtually all software-only, offering tin-wrapped software through resellers rather than direct sales, which saves a lot of margin dollars.

NetApp is in the position DEC was in during the ’90s, when commodity servers eviscerated the VAX business, aided by supply chain innovations from Dell and others.

Servers became a box to be ordered over the phone, not painstakingly configured with a sales engineer and delivered in a few months. Storage is finally following suit. And it doesn’t look good for NetApp.

Courteous comments welcome, of course.

{ 23 comments }

EMC’s DSSD hiring is exploding

by Robin Harris on Wednesday, 18 February, 2015

DSSD, the Valley startup acquired by EMC last year (see EMC goes all in with DSSD), is continuing to hire at an accelerating rate. Informed sources put the current DSSD team at 160 heads, with plans to grow it to 800 over the next year.

This is a program in a hurry. Hiring in such numbers is expensive in today’s very tight Silicon Valley job market, where even new graduates command six figures – and need them to afford an apartment with only one roommate.

But given that VMAX needs a low latency backend to replace the uncompetitive VMAX disk infrastructure, the rush is understandable. Rotating media is no longer competitive with flash for enterprise applications and VMAX is feeling the heat.

DSSD is looking for hardware, datapath, firmware, management software and storage appliance software engineers. The jobs are in Menlo Park, a very nice and expensive place to live.

The StorageMojo take
DSSD has some very smart people behind it, including the ZFS crew and Andy Bechtolsheim. But the hiring plans suggest that integrating DSSD with VMAX is a much bigger job than originally expected.

That shouldn’t be a surprise. Besides the problem of building extremely high performance and highly reliable storage – a really hard problem even for experienced teams – lots of the disk-focused optimizations in current VMAX code will have to be surgically removed to achieve a significant VMAX speedup.

EMC will have to show customers significant performance boosts if they hope to earn higher margins on the refreshed all-flash VMAX. Excising decades of disk-oriented cruft is key and non-trivial.

Competitors touting IOPS today need to analyze EMC’s likely marketing direction with the new VMAX. Of course they’ll continue to push the “data services platform” line, but latency is the intended performance proof point.

Engineering schedules only slip one way. While EMC will tout “early access” – beta – programs late this year, don’t expect to see commercial shipments until mid-2016.

But once EMC gets it out the door, expect a full-on marketing onslaught for a flash-enhanced VMAX value proposition. With cloud providers taking significant enterprise storage capacity, EMC will be hungry for market share anywhere they can get it.

Courteous comments welcome, of course.

{ 7 comments }

Latency in all-flash arrays

by Robin Harris on Tuesday, 17 February, 2015

StorageMojo has been writing about latency and flash arrays for years (see The SSD write cliff in real life), with a focus on data from the TPC-C and SPC-1 benchmarks. The folks at Violin Memory asked me to create a Video White Paper to discuss the problem in a bite-size chunk.

Latency is the long pole
Steven Swanson’s and Adrian M. Caulfield’s work at the University of California San Diego found that with a 4Kbyte disk access, the standard Linux software stack accounted for just 0.3% of the latency and 0.4% of the energy consumption. With flash however, the same software stack accounted for 70% of the latency and 87.7% of the energy consumed.

Clearly, the software stack issue belongs to no single company. But array vendors can help by reducing the latency inside their products.

That’s why open and documented benchmarks are important. It is too bad that the erstwhile industry leader, EMC, doesn’t offer either benchmark, unlike other major vendors.

The StorageMojo take
The Violin engineering team has done an admirable job of reducing their array’s latency, as measured in TPC-C benchmarks. Not merely the average latency – which almost any flash array can keep under 1ms – but maximum latency as well, at 5ms or less.

Compare that to an early 2015 filing by a major storage company for their flagship flash storage array. The Executive Summary shows that the array’s average latency was under 1 second.

Impressive and reassuring. However, in the Response Time Frequency Distribution Data we see what the average response times don’t tell us: millions of I/Os took over 5ms and hundreds took over 30ms – and perhaps much longer, since the SPC-1 report groups them all in one “over 30ms” bucket.
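To see how a reassuring average can coexist with an ugly tail, here’s a toy Python sketch – the numbers are invented for illustration, not taken from any SPC-1 filing. Even if only 0.1% of I/Os hit a slow path, the average barely moves while thousands of requests blow past 5ms:

```python
# Illustrative only: synthetic latencies, NOT data from any SPC-1 report.
import random

random.seed(42)
N = 1_000_000

# 99.9% of I/Os complete fast (~0.5ms); 0.1% hit a slow path (write cliff,
# garbage collection, etc.) and take tens of milliseconds.
latencies_ms = [
    random.gauss(0.5, 0.1) if random.random() < 0.999 else random.uniform(5, 50)
    for _ in range(N)
]

mean = sum(latencies_ms) / N
over_5ms = sum(1 for x in latencies_ms if x > 5)
over_30ms = sum(1 for x in latencies_ms if x > 30)

print(f"average latency : {mean:.2f} ms")   # looks great on a summary page
print(f"I/Os over 5ms   : {over_5ms:,}")    # the tail the average hides
print(f"I/Os over 30ms  : {over_30ms:,}")
print(f"worst case      : {max(latencies_ms):.1f} ms")
```

Scale that 0.1% up to the billions of I/Os in a full SPC-1 run and you get the millions-over-5ms pattern above – which is why the frequency distribution tells you more than the executive summary.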

The basic insight of Statistical Process Control, that reduced component variability improves system quality, applies to computer systems as well. Reduced maximum latency and sustained IOPS are key metrics for improving system performance and availability.

Courteous comments welcome, of course. Violin paid StorageMojo to produce the video, however the opinions expressed are my own.

{ 3 comments }

EMC’s missing petabytes: the cost of short stroking

by Robin Harris on Tuesday, 10 February, 2015

A couple of weeks ago StorageMojo learned that while a VMAX 20k can support up to 2,400 3TB drives, it can only address ≈2PB. Where did the remaining 5 petabytes go?

Some theories were advanced in the comments, and I spoke to other people about the mystery. No one would speak on the record, but here’s the gist of the received wisdom.

Different strokes for different folks
Short stroking is the short – and best – answer. Short stroking uses only the outermost tracks – the fastest, densest and most capacious – to punch up drive performance.

By reducing head seek time and maximizing data transfer rates, a short-stroked drive gets more IOPS and faster transfers. Wonderful!

But at what cost? The 20k’s numbers serve as a first approximation.

Assume a maxed-out VMAX 20k with 80 of its slots used for SSDs; that leaves 2,320 3TB 3.5″ drives, for a raw capacity of 6,960TB. Assuming 8-drive RAID 6 LUNs, we get a dual-parity protected capacity of 5,220TB. Dividing EMC’s spec’d open-system RAID 6 capacity of 2,067TB by that 5,220TB gives 39.6% capacity efficiency, which would use roughly the outer 0.8″ of a 3.5″ platter. That would certainly improve IOPS and transfer rate.

Research indicates that the 2,320 disks are roughly half of the total BOM cost. Thus, if you pay $1.4m (not including software) for a fully loaded 20k, $700k goes for the raw 7PB. Since you only get 2PB usable, you are paying ≈$350k – depending on your discount, of course – per short-stroked PB of capacity.
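For anyone who wants to check the arithmetic, here’s a quick Python sketch that reproduces it. Every input is an assumption stated above – the drive count, 8-drive RAID 6 groups, EMC’s 2,067TB spec, the $1.4m price and the disks-are-half-the-BOM estimate – so swap in your own configuration and discount:

```python
# Back-of-the-envelope check of the short-stroking arithmetic above.
# Every input is an assumption from the post; adjust to your own config.

hdd_count      = 2400 - 80       # max drive slots, less 80 used for SSDs
hdd_tb         = 3               # 3TB 3.5" drives
raid_width     = 8               # 8-drive RAID 6 groups (6 data + 2 parity)
spec_usable_tb = 2067            # EMC's open-system RAID 6 capacity spec
system_price   = 1_400_000       # fully loaded 20k, software not included
disk_share     = 0.5             # disks ~ half the total BOM cost

raw_tb     = hdd_count * hdd_tb                      # 6,960 TB
raid6_tb   = raw_tb * (raid_width - 2) / raid_width  # 5,220 TB
efficiency = spec_usable_tb / raid6_tb               # ~39.6%

disk_spend = system_price * disk_share               # ~$700k
per_pb     = disk_spend / (spec_usable_tb / 1000)    # ~$339k per usable PB
                                                     # (~$350k if you round
                                                     # usable down to 2PB)

print(f"raw capacity       : {raw_tb:,} TB")
print(f"RAID 6 capacity    : {raid6_tb:,.0f} TB")
print(f"capacity efficiency: {efficiency:.1%}")
print(f"disk spend         : ${disk_spend:,.0f}")
print(f"cost per usable PB : ${per_pb:,.0f}")
```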

The StorageMojo take
We already knew traditional legacy arrays were expensive. What’s really interesting is that even with short stroking, 15k disks would be hard-pressed to do more than 600 IOPS each – or, generously, 1.4m IOPS across 2,320 drives. EMC promises “millions of IOPS” from the 20k, so even with short stroking it’s likely that much of the system’s total performance comes from its caching and SSDs rather than the costly short-stroked disks.

Before you buy your next VMAX or other legacy architecture disk array, take a hard look at the cost of short stroked disks. You can do much better, with less complexity, with an all-flash solution at an equal or lower cost. Not to mention the lower OpEx from reduced floor space, power, cooling and maintenance.

Courteous comments welcome, of course. EMC’ers and others are welcome to offer their perspectives to this analysis. Update: Note that the missing petabytes come with using 7200RPM drives, not 15k drives. End update.

{ 6 comments }

Help StorageMojo find the VMAX 20k’s lost petabytes!

by Robin Harris on Wednesday, 21 January, 2015

While working on a client configuration for a VMAX 20k – and this may apply to the 40k as well, as I haven’t checked – I encountered something odd: the 20k supports up to 2,400 3TB drives, according to the EMC 20k spec sheet. That should be a raw capacity of 7.2PB.

However, the same spec sheet appears to say that the maximum usable storage capacity – after taking care of formatting overhead, system needs, etc. – is ≈2PB. Here’s the table from the spec sheet:

[Capacity table from the EMC VMAX 20k spec sheet]

Coming at it from another direction: since the 20k supports four 60-drive enclosures per rack, and 10 racks of drives – with a lot of daisy-chaining – you can indeed connect 2,400 3.5″ SAS drives. So where is the discrepancy?
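For the record, the quick arithmetic – using only the spec sheet figures quoted above, nothing from EMC beyond that:

```python
# Sanity check of the VMAX 20k spec sheet numbers quoted above.
enclosures_per_rack  = 4     # four 60-drive DAEs per rack
drives_per_enclosure = 60
drive_racks          = 10
drive_tb             = 3     # 3TB 3.5" SAS drives

max_drives = enclosures_per_rack * drives_per_enclosure * drive_racks  # 2,400
raw_pb     = max_drives * drive_tb / 1000                              # 7.2 PB

spec_usable_pb = 2           # the spec sheet's ≈2PB usable figure
missing_pb     = raw_pb - spec_usable_pb

print(f"max drives      : {max_drives:,}")
print(f"raw capacity    : {raw_pb:.1f} PB")
print(f"spec'd usable   : ~{spec_usable_pb} PB")
print(f"unaccounted for : ~{missing_pb:.1f} PB")   # the 'lost' petabytes
```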

Try as I might I can’t reconcile it. Simple spec sheet error? I can’t read? Are large capacity drives short stroked? Is the NSA using the rest? What’s up?

I know there’s a lot of EMC expertise out there, so please, enlighten me!

The StorageMojo take
I’ve been looking at a number of systems this week. As far as large vendor product info goes, EMC and NetApp are the least forthcoming, HP the most, and HDS in the middle.

Normally, withholding information from customers is meant to ensure a call to their friendly Sales Engineer, but perhaps it is to instill passivity in customers: “Boy, I can’t figure this out. Why even try?”

Perhaps more on this topic later. I’ve also thought of a twist on the Price Lists that may justify un-deprecating them. Your thoughts?

Courteous comments welcome, of course. Update: If I were in competitor product marketing, especially for flash products, I’d be updating my “flash is competitive with disk” slides tonight. End update.

{ 4 comments }

Facebook on disaggregation vs. hyperconvergence

January 6, 2015

Just when everyone agreed that scale-out infrastructure with commodity nodes of tightly-coupled CPU, memory and storage is the way to go, Facebook’s Jeff Qin, a capacity management engineer – in a talk at Storage Visions 2015 – offers an opposing vision: disaggregated racks. One rack for computes, another for memory and a third – and […]

13 comments Read the full article →

Friday hike blogging: Highline trail

December 19, 2014

Hike blogging got interrupted by some schedule conflicts and bad weather. Federal district court jury duty was one interruption. Some needed rain and cold temperatures was another. To make up for the lapse, here are two pictures from this week’s hike around Cathedral Rock. Winter is one of my favorite seasons because of the clouds […]

0 comments Read the full article →

WD acquires Skyera: whoa!

December 18, 2014

The Skyera acquisition could signal a sea change in the relationship between storage device and system makers. It is overdue. Traditionally, device makers avoided competing with their customers. This is what made Seagate’s acquisition of Xiotech (now X-IO) years ago so surprising. StorageMojo was critical of Seagate’s Xiotech acquisition because there were large and […]

9 comments Read the full article →

The 30% solution

December 2, 2014

The existential battle facing legacy storage vendors is about business models, not technology or features. Market forces – cloud providers and SMBs – are trying to turn a high margin business into a much lower margin business. We have already seen this happen with the servers. Many large minicomputer companies enjoyed 60 to 70% gross […]

6 comments Read the full article →

Friday hike blogging: Cockscomb Butte

November 21, 2014

Thanks to re:Invent and some other issues I haven’t done much hiking in the last 2 weeks. But back on the 8th I took a hike with my friend Gudrun up Cockscomb. There’s no official Forest Service trail so I was glad to have a guide for my first ascent. We spent about 90 minutes […]

0 comments Read the full article →

Primary Data takes on the enterprise

November 21, 2014

The economics of massive scale-out storage systems has thrown a harsh light on legacy enterprise storage. Expensive, inflexible, under-utilized data silos are not what data intensive enterprises need or – increasingly – can afford. That much is obvious to any CFO who can read Amazon Web Services’ pricing. But how to get from today’s storage […]

2 comments Read the full article →

EMC’s re-intermediation strategy

November 18, 2014

EMC – and other legacy array vendors – are trying to become an intermediary between enterprises and the cloud. Are cloud-washed arrays a viable strategy? EMC’s Joe Tucci is working to ensure that EMC can survive in a cloud world even if he doesn’t manage to sell the company before he retires. The acquisitions of […]

2 comments Read the full article →

AWS re:Invent this week

November 11, 2014

StorageMojo’s legion of analysts are saddling up for the ride to Las Vegas for AWS re:Invent. This is a first time for us at this event and I trust it will be excellent. If you’re there feel free to say hi! Courteous comments welcome, of course. Please comment on anything you’d like to see The StorageMojo take on.

0 comments Read the full article →

Pure vs EMC: who’s winning?

November 11, 2014

Forbes contributor and analyst Peter Cohan writes on seemingly conflicting stories coming from Pure and EMC. Let’s unpack the dueling narratives. Does Pure win 70% vs EMC or does EMC win 95% vs Pure? The metrics: Both parties seem to agree that they meet up very often in competitive bidding situations. EMC claims that it […]

11 comments Read the full article →