IBM’s Q1 storage pain – amputation imminent

by Robin Harris on Thursday, 17 April, 2014

When my children were little and cried due to a scraped knee, I would examine it gravely and then say: “This looks bad! Time to amputate!”

That quieted them right down and, to my relief, they’ve grown up to be functional human beings working – of all things – in high tech.

But that was a joke. With IBM storage I wouldn’t be joking.

Amputation is imminent. And needed to keep shareholders happy.

According to Storage Newsletter, IBM storage revenue is in an accelerating slide:

[Chart: IBM storage revenue decline]

The StorageMojo take
My, that looks grim. But why?

The sale of IBM's server business was a wake-up call to dyed-blue customers that their usual buying choices weren't going to look too good at the next annual review. So they started looking elsewhere. As did IBM sales.

In the short term that's good for EMC – which seems to be sweeping up most of the lost market share of the 7 dwarves – but in the longer term it's even better for upstarts like Nimble, Nutanix and Coho Data, who offer modern architectures and strong value props.

With the low-end server business gone, management’s spotlight falls on storage. As StorageMojo noted a year ago:

IBM’s storage business is at risk. While they have great technology, none of their hardware products are even a strong #2. Two thirds of IBM’s Systems and Technology group business is servers, and it is clear that IBM management isn’t happy with how some product lines are performing.

They’re less happy today.

Courteous comments welcome, of course. And to think that 20 years ago IBM dominated the storage industry.


EMC gets the Cisco Whiptail lash

by Robin Harris on Tuesday, 15 April, 2014

We knew this moment would come.

According to William Blair, a Chicago-based investment bank, their data networking and storage tracker (as quoted by the most excellent Chris Mellor in The Reg) sees that:

Despite Cisco’s public commentary about not wanting to leverage its Whiptail acquisition (renamed Invicta) as a stand-alone enterprise storage system, our industry contacts indicate that the Cisco salesforce is enthusiastically selling the all-flash-array as a stand-alone platform and competing directly against vendors such as Pure and EMC/XtremIO in the field.

Cisco is also bundling the platform with its UCS server line as a converged infrastructure play. Over time, we expect Cisco to attempt to displace both EMC/Vblock and NetApp/Flexpod deployments with its UCS/Whiptail converged offering as the company aims to address new markets and find growth.

Color me not surprised.

The StorageMojo take
Cisco’s UCS never made sense – due to low server margins – unless they also went into storage, where the margins recall mainframe days of yore. Many expected Cisco to buy NetApp or EMC 5 or 6 years ago, but no, they went with UCS and VCE.

Now they’re decoupling from VCE. Cisco corporate will likely deny this for some time, but the proof is in the sales force compensation. If Cisco sales gets bigger commissions selling Whiptail then all the corporate mellow-tone is background music for the VCE wake.

When I worked for EMC's largest reseller 15 years ago I saw how deftly this worked. Corporate laid out very neat and clear guidelines for their sales and our sales. Except that when their sales ignored the guidelines, two things happened: EMC wouldn't return our calls; and then they'd explain that the local offices had a lot of latitude and what could they do?

Looks like EMC is going to get a taste of their own medicine – and not a moment too soon.

Courteous comments welcome, of course. Cisco is hardly a classic underdog, but I’m rooting for them. What do you think?


EMC up to old tricks

by Robin Harris on Tuesday, 15 April, 2014

EMC World is in Las Vegas this year, a short drive from the StorageMojo Global HQ. Requested a press pass in February.

No response.

Repeated request a couple of weeks ago and got this answer:

At this time the press and analyst programs are at full capacity and we are not able to accommodate any additional requests.

After all, there are only thousands of people attending. StorageMojo = back-breaking straw. Or persona non grata?

Whatever.

The StorageMojo take
When it comes to petty behavior, EMC has never shaken its testy South Boston ways. I’ve often thought – hoped – that the influx of adults from other companies would calm them down. Evidently not.

Ever since StorageMojo twitted an EMC VP and got a nasty cease-and-desist letter in return a couple of weeks later, I've known they have a thin-skinned culture. But a $23 billion company should have a little more courage – and grace.

But maybe they have more to be thin-skinned about than I know. After all:

  • RSA has been hacked and linked to NSA skullduggery.
  • VMware accounts for most of EMC’s market cap, meaning Wall Street discounts EMC’s core storage business.
  • The VNX business faces two huge challenges: XtremIO and every other all-flash array. Either will hurt revenues.
  • Speaking of all-flash arrays, “partner” Cisco is flogging Whiptail hard as part of UCS. That’s gotta hurt!
  • Pivotal is a big internally developed bet – something that historically EMC has not been good at – that looks to be caught in the firefight between AWS and Google.

None of these are fun. Taken together, things may look more dire inside EMC than we know.

Courteous comments welcome, of course.


Avere makes cloud NFS fast & safe for the enterprise

by Robin Harris on Tuesday, 8 April, 2014

This is big. In new SPECsfs2008_nfs.v3 tests, Avere's FXT 3800 clustered edge filers achieved something remarkable: performance using Cleversafe and Amazon S3 cloud backends that was every bit as good as with local backing stores.

Avere tested 4 systems: three 3-node FXT 3800 clusters – one with local NAS backing storage, one backed by a Cleversafe object store and one backed by Amazon S3 – plus a 32-node cluster run across a simulated WAN with 150ms of round-trip latency.

Avere architecture in brief
From a StorageMojo post a year ago, here’s a refresher on Avere’s salient features:

  • Avere's appliance is a read and write cache, so hot data I/O is handled directly and not routed to the backend filers. Typically only 1 out of 50 I/Os leaves Avere for backend NAS, and for some workloads it is as little as 1 out of 200.
  • Avere’s file system is the client of the backend filers, so it always knows where the data is. Furthermore, Avere is certified with vendors like NetApp to handle the inevitable corner cases.
  • The system moves data across 4 tiers – DRAM, SSD, SAS and the backend filers – to achieve high performance, unlike products that rely on fast backend performance (see the sketch after this list).
  • They also manage blocks within files, so a change in a file doesn’t require rewriting the entire file, a popular feature in large file applications.
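
To make the tiering concrete, here's a minimal sketch of a read traversing that hierarchy – toy names and Python dicts standing in for real tiers, not Avere's implementation:

```python
# Toy read path across a DRAM/SSD/SAS cache hierarchy with a filer behind it.
# Purely illustrative; names and structure are assumptions, not Avere's code.

class TieredCache:
    def __init__(self, backend_read):
        self.tiers = [("dram", {}), ("ssd", {}), ("sas", {})]  # fastest first
        self.backend_read = backend_read  # callable that fetches from the filer
        self.total = 0
        self.backend_hits = 0

    def read(self, path):
        self.total += 1
        for _name, store in self.tiers:
            if path in store:
                return store[path]            # served at the edge, no filer I/O
        data = self.backend_read(path)        # the ~1-in-50 case
        self.backend_hits += 1
        self.tiers[0][1][path] = data         # promote into the fastest tier
        return data

cache = TieredCache(backend_read=lambda p: f"<contents of {p}>")
cache.read("/vol/proj/frame-0001.dpx")   # miss: goes to the backend filer
cache.read("/vol/proj/frame-0001.dpx")   # hit: served from the DRAM tier
```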

Results
Each of the 3-node clusters achieved strikingly similar results: ≈180,000 ops/sec with an overall response time of less than 0.9msec. Remember, two of those configs used different cloud backends, while one used local storage.

The 32-node cluster, running across the simulated WAN with 150ms of round-trip latency, achieved over 1,100,000 ops/sec with a 1.3ms overall response time.

These results are among the highest ops/sec and lowest overall response times of any SPECsfs NFS submissions.

The StorageMojo take
Avere not only makes cloud object stores fast enough for enterprise use, it also makes them safe enough. Those massive object stores rely on something called eventual consistency – where “eventual” is not defined – which means active files could be borked if an old copy is retrieved after a newer copy has been written but not fully disseminated.

Avere eliminates that problem because they keep the latest version at least until the cloud is consistent. Active files will be served by Avere for maximum performance.
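
Here's a rough sketch of that idea, assuming a simple pin-until-consistent scheme – the class and method names are hypothetical, not Avere's mechanism:

```python
# Sketch: hide eventual consistency by always answering reads from the newest
# locally held copy. Illustrative only; not Avere's actual implementation.

class ConsistencyShield:
    def __init__(self, cloud_put, cloud_get):
        self.pinned = {}            # newest versions we can't drop yet
        self.cloud_put = cloud_put
        self.cloud_get = cloud_get

    def write(self, key, data):
        self.pinned[key] = data     # latest version stays at the edge
        self.cloud_put(key, data)   # pushed to the object store

    def read(self, key):
        if key in self.pinned:      # never hand back a stale cloud copy
            return self.pinned[key]
        return self.cloud_get(key)

    def mark_consistent(self, key):
        # Called once the object store reliably returns the new version.
        self.pinned.pop(key, None)
```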

Avere is also cloud-agnostic across providers that support S3 semantics. With a few mouse clicks you can start moving from one provider to another. No more Nirvanix-type surprises.
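
As a hedged illustration of what S3 semantics buy you, the same client code can target different providers just by swapping endpoints and credentials. The endpoints, keys and bucket below are placeholders, and the example uses the generic boto3 client rather than anything Avere-specific:

```python
# Provider-agnostic object access via the S3 API. Endpoint URLs, keys and the
# bucket name are placeholders; nothing here is Avere's interface.
import boto3

def s3_client(endpoint_url, access_key, secret_key):
    return boto3.client(
        "s3",
        endpoint_url=endpoint_url,
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
    )

providers = [
    s3_client("https://s3.amazonaws.com", "KEY_A", "SECRET_A"),
    s3_client("https://objects.other-cloud.example", "KEY_B", "SECRET_B"),
]

# Identical calls work against either backend:
for s3 in providers:
    s3.put_object(Bucket="my-bucket", Key="hello.txt", Body=b"hello")
```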

EMC and NetApp are no doubt hoping no one notices Avere’s results. But with the recent cloud price wars every enterprise needs to go back and check out the economics of Avere + cloud. You can be sure your CFO has noticed.

Courteous comments welcome, of course. Saw the Avere folks at NAB 2014 yesterday. They seemed pretty chipper.


Off to NAB 2014 – and beyond!

by Robin Harris on Sunday, 6 April, 2014

As a reward for good behavior, StorageMojo's top analysts will be attending NAB for a couple of days. Today it's the HP briefing this afternoon and the ShowStoppers official press event this evening.

Then it is a very full day Monday – mostly in the lower South Hall – reviewing the latest in media storage presented at the show. Stay tuned here and on ZDNet.

Then it's back to the lonely and lovely high desert mountains of Northern Arizona. See picture above.

Feel free to say hi.

The StorageMojo take
CES is a toy store for consumers. NAB is a toy store for media professionals.

StorageMojo likes ‘em both.

Consumerization of media production is in full swing. Nobody has yet shot a feature-length movie – can’t accurately say film anymore, can we? – on an iPhone, but somebody, somewhere, is surely planning one.

At the same time the widespread adoption of 4k video capture is stressing workflows and production infrastructure as never before. Storage companies are loving the 4x capacity requirements of 4k, and it is driving the adoption of Thunderbolt peripherals – storage, docks, ingest – as people move traditional post-production tasks into the production workflow.

A quad-core MacBook Pro with an AJA Io 4K and a Thunderbolt SSD array is a production and editing tool you can carry in a briefcase. Amazing.

Courteous comments welcome, of course.


A3Cube’s cluster architecture

by Robin Harris on Monday, 31 March, 2014

The transition to a storage-centric world continues. Billions of internet devices are driving exponential scale-up challenges. A3Cube’s Massively Parallel Data Processor (MPDP) may be the most comprehensive response yet to that reality.

It makes less and less sense to move massive amounts of data, and more and more sense to move compute and networking closer to where the data lives. This is the core architectural problem.

But as with any potentially category-busting product, explaining it is a problem: it doesn’t fit into our common categories. The MPDP is a network, a fabric, a storage system and an enabling foundation for analytical apps.

That’s a key point anchoring the following discussion. The MPDP is intended to interoperate and integrate with existing apps and protocols, such as MPI and Hadoop.

Their product consists of several pieces. Taking it from the top.

The software overlaying the hardware infrastructure is Build your Own Storage, or ByOS. Based on Linux, it manages the underlying structures needed for the single-namespace parallel filesystem: a striper engine and a data plane network.

The striper engine works across all nodes and eliminates the need for metadata synchronization by offloading metadata updates to local file systems. All I/O is replicated across multiple nodes for robustness and scalability. This enables parallelization and integrates with the dedicated parallel hardware architecture, while eliminating the need for RAID hardware.
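
A minimal sketch of the striping-plus-replication idea – every block lands on several nodes, so losing one node loses no data. The layout function is an assumption for illustration, not A3Cube's striper:

```python
# Toy striping with replication: every block of a file is written to
# `replicas` different nodes. Illustrative only; not A3Cube's striper engine.

def place_blocks(num_blocks, nodes, replicas=2):
    """Return {block_index: [nodes holding a copy of that block]}."""
    n = len(nodes)
    return {
        b: [nodes[(b + r) % n] for r in range(replicas)]
        for b in range(num_blocks)
    }

# A 6-block file striped across 4 nodes, 2 copies of every block:
print(place_blocks(6, ["node0", "node1", "node2", "node3"]))
# {0: ['node0', 'node1'], 1: ['node1', 'node2'], 2: ['node2', 'node3'], ...}
```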

The data plane is a dedicated internode network based on a traffic coprocessor – the RONNIEE Express Fabric – designed to provide low latency and high bandwidth for up to 64,000 nodes, without expensive switches, in a sophisticated mesh torus topology. User traffic remains on a front-end network.

Graphic courtesy of the University of Wisconsin
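
To see why a torus scales without expensive switches, consider a hypothetical 3-D torus: 40 × 40 × 40 gives exactly 64,000 nodes, and each node needs links only to its six immediate neighbors. The dimensions are my assumption for illustration, not A3Cube's published topology:

```python
# Neighbor addressing in a wrap-around 3-D torus. A 40x40x40 grid holds
# 40**3 = 64,000 nodes. Dimensions are illustrative assumptions only.

DIMS = (40, 40, 40)

def neighbors(x, y, z, dims=DIMS):
    dx, dy, dz = dims
    return [
        ((x + 1) % dx, y, z), ((x - 1) % dx, y, z),   # +/- x
        (x, (y + 1) % dy, z), (x, (y - 1) % dy, z),   # +/- y
        (x, y, (z + 1) % dz), (x, y, (z - 1) % dz),   # +/- z
    ]

print(neighbors(0, 0, 0))   # edges wrap: node (0,0,0) links to (39,0,0), etc.
```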

The data plane network provides an “in-memory network” where internode accesses appear to the local node as local memory accesses. There's no network stack, and the globally shared memory architecture delivers performance that A3Cube says is 6 to 8 times faster than Gigabit Ethernet or DDR InfiniBand.

A3Cube performance

The RONNIEE interface card is part of the storage and compute node. The node is a server loaded with as much go-fast hardware – SSDs, cores, other coprocessors, Infiniband, 10GigE – as you can afford, plus your favorite apps. A3Cube also provides software modules for system control, scheduling and computation.

The StorageMojo take
Congratulations to A3Cube for coming up with a creative answer to the question of massive scaling of massive data. The architecture makes sense.

Concerns include A3Cube’s HPC focus, a technical issue with PCIe 3, financing and competition.

  • HPC has been the graveyard for much great technology for two reasons: it’s much more fun – i.e. distracting – for dev teams than typical commercial markets; and, the highly variable spend thanks to the national labs. Not many commercial products come out of an HPC focus. Big data – including some HPC – is a better focus.
  • The RONNIEE Express runs on PCIe 2.0 but not on PCIe 3.0 due to a device enumeration issue. But they expect to be able to support PCIe 4.0 whenever that comes out.
  • Funding. A3Cube has been privately funded to the tune of about $1 million, and they need a lot more to go big. I don't see investors funding an HPC startup.
  • Competition. PLX Technology is already out there with not-as-scalable PCIe 3.0 fabrics. Not as elegant, but available now.

None of these should be deal killers. A3Cube’s issues are fixable with money and time and I hope they get both.

Their storage-centric approach and fast mesh architecture feels right. Fundamental architectural innovation is still alive and that’s a very good thing.

Courteous comments welcome, of course.


Is software’s free ride over?

by Robin Harris on Thursday, 27 March, 2014

It’s been a rule of thumb for the last 30+ years that any functionality implemented in hardware will surely migrate to software. But that is starting to change.

At the beginning of a new application – say RAID controllers – the volumes are low and the trade-offs poorly understood. Perfect for FPGAs, which are relatively costly per unit, but flexible and easily updated.

Once the application is better understood the cycle-intensive bits can be optimized and hardware accelerated in ASICs, which are cheaper than FPGAs in higher volumes.

But the movement to all software comes when CPUs are fast enough to run the software without extra hardware acceleration. Most RAID controllers have been all software running on standard x86 CPUs for the last 8 years or so.
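
For a sense of what migrated into software, here's the cycle-intensive core of RAID-5 style parity – a byte-wise XOR across the data strips, with the same XOR rebuilding a lost strip. A minimal sketch; real controllers use vectorized XOR:

```python
# RAID-5 style parity and rebuild, the classic candidate for hardware
# acceleration that now routinely runs on plain x86. Minimal sketch.

def parity(strips):
    """Byte-wise XOR of equal-length strips into a parity strip."""
    out = bytearray(len(strips[0]))
    for strip in strips:
        for i, b in enumerate(strip):
            out[i] ^= b
    return bytes(out)

def rebuild(surviving_strips, parity_strip):
    """Recover the missing strip by XOR-ing parity with the survivors."""
    return parity(surviving_strips + [parity_strip])

d0, d1, d2 = b"\x01\x02", b"\x0f\x0f", b"\xaa\x55"
p = parity([d0, d1, d2])
assert rebuild([d0, d2], p) == d1   # d1 recovered after a simulated disk loss
```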

There’s a new sheriff in town
Sheriff Moore – and he’s telling you to slow down.

Moore's Law – the doubling of the number of transistors on a chip every 18-24 months – is still hanging on. But the assumed performance increase that has accompanied it is not. Haswell processors are barely faster than Ivy Bridge, and the next-gen Broadwell's performance improvements will be marginal as well – except for graphics.

Transistors no longer get faster as they get smaller. So extra transistors are used to speed up common functions. Codec acceleration. Fancier graphics.

Great stuff! But it means that today’s FPGA-based hardware is much more likely to remain hardware. And the comforting assumption that we can put all the cool stuff in software real soon is no longer operative.

The StorageMojo take
The entire infrastructure world’s acceleration is slowing down. CPUs aren’t much faster. Storage densities aren’t improving as they were 10 years ago. Network speeds and more importantly costs aren’t dropping as they were.

Several things have obscured the trend and blunted its impact: multiprocessing; NAND flash SSDs; 10GigE uptake; massively parallel GPUs; scale-out architectures. But those too are less and less effective.

The current mania for Software Defined Everything relies on the availability of unused hardware infrastructure resources. That was VMware’s original value prop. But over the next decade – unless there are some fundamental breakthroughs not now visible – that will change.

It’s not the end of Software Defined Everything, but you can see it from here.

Courteous comments welcome, of course. If you see it differently, why?


OpenStack Swift software defined storage

by Robin Harris on Tuesday, 18 March, 2014

Back in 2006 – before Barack Obama was famous – StorageMojo evaluated the Google File System and concluded:

Looking at the whole gestalt, even assuming GFS were for sale, it is a niche product and would not be very successful on the open market.

As a model for what can be done however, it is invaluable. The industry has strived for the last 20 years to add availability and scalability to an increasingly untenable storage model of blocks and volumes, through building ever-costlier “bulletproof” devices.

GFS breaks that model and shows us what can be done when the entire storage paradigm is rethought. Build the availability around the devices, not in them, treat the storage infrastructure as a single system, not a collection of parts, extend the file system paradigm to include much of what we now consider storage management, including virtualization, continuous data protection, load balancing and capacity management.

GFS is not the future. But it shows us what the future can be.

The future is now downloadable
Software Defined Storage with OpenStack Swift by Joe Arnold, CEO of SwiftStack, describes a GFS-like object storage system.

  • Commodity hardware. Servers and disks don't have to be at the same rev level or generation.
  • 3x replication and smart data placement. Like the original GFS (a simplified placement sketch follows this list).
  • Object storage. The only way to fly for high scale.
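
Here's a deliberately simplified sketch of what smart data placement means: hash an object's name and deterministically pick three distinct zones for its replicas. Swift's real ring adds partitions, weights and rebalancing; the zone and node names are made up:

```python
# Simplified illustration of replica placement across failure zones.
# Not Swift's actual ring code; just the core hashing idea.
import hashlib

ZONES = {
    "zone1": ["node1a", "node1b"],
    "zone2": ["node2a", "node2b"],
    "zone3": ["node3a", "node3b"],
    "zone4": ["node4a", "node4b"],
}

def place(object_name, replicas=3):
    """Pick one node in each of `replicas` distinct zones, deterministically."""
    digest = int(hashlib.md5(object_name.encode()).hexdigest(), 16)
    zone_names = sorted(ZONES)
    chosen = []
    for i in range(replicas):
        zone = zone_names[(digest + i) % len(zone_names)]
        nodes = ZONES[zone]
        chosen.append((zone, nodes[digest % len(nodes)]))
    return chosen

print(place("photos/cat.jpg"))   # the same name always maps to the same 3 zones
```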

But Swift is more than GFS. It supports user accounts and managing those accounts. S3 API support. Global clusters. Authentication services. Block storage. And more that is needed outside Google’s data centers.

The StorageMojo take
Ten years ago, when I started blogging, there was just one way to do enterprise IT: buy – or maybe lease – storage and networking boxes from vendors with gross margins in the 60% range. That expense sank many dot bomb startups even faster than their iffy business models.

Virtual servers were the hot new thing thanks to their intoxicating flexibility. The intoxication continues.

If VMware was the gateway drug to data center virtualization, Amazon Web Services was the 2×4 head thwack that CIOs needed to stop playing golf with vendors and start playing a new game: adapting scale-out strategies to enterprise-scale computing. While not everyone agrees, these strategies are key to bringing big data apps in-house – the first step to applying these to our cooling storage needs.

OpenStack Swift is a good example of the trend. SwiftStack’s book is an excellent intro to implementation at the enterprise level. The technology is there, but that doesn’t answer the CFO’s questions about how this saves money and positions the firm for the future.

That’s for another post – or two.

Courteous comments welcome, of course. Joe’s book is available from SwiftStack in hardcopy or as an eBook.
