Cloud market heats up

by Robin Harris on Friday, 18 November, 2016

In 2014 Gartner estimated that Amazon Web Services had 5x the utilized compute capacity of the rest of the cloud providers. There are a couple of qualifiers there – utilized, compute – but as a rough guess, it looked like AWS had around an 80% market share.

But no more. In a recent report Synergy Research Group noted that AWS has a 45% share in IaaS, ≈30% share in PaaS, and is well behind IBM in Managed Private Cloud.

Here’s SRG’s chart:
[Synergy Research Group market share chart]

They also report that Microsoft and Google, while smaller in absolute share, are growing much faster than AWS – which is always easier from a smaller base.

The StorageMojo take
It’s rare that a company that grabs an early lead can hang onto it. That’s what made IBM’s decades-long domination of the computer industry so unusual.

A few years ago it looked like AWS was set for IBM-like dominance of the cloud space. These numbers, if accurate, suggest that AWS’s days of dominance are numbered.

Further, IBM’s strong showing in private cloud points to the value of a direct sales force. Microsoft is doing a good job leveraging the Windows franchise, but I expect even greater things from Microsoft Research.

Competition is good. The cloud market is getting a healthy dose.

Courteous comments welcome, of course.


When is a feature a bug?

by Robin Harris on Thursday, 17 November, 2016

Ten years ago in Enterprise IT: the elephant’s graveyard I wrote about the upmarket trap:

Engineering and marketing find it easy to justify fun new technology since a 10% goodness increase on a $500,000 machine is worth $50,000, while on a $1,000 machine it is worth $100 only if the customer is knowledgeable enough to notice. Which they aren’t, thank you very much.

So you keep moving to the high margin upmarket, ceding lower margin business to others. Finally you’re at the top of the upmarket, with all the costs and bloat, and then what?

As we climb upmarket we eventually see a declining return on investment in new features, and an increase in support costs, because of less-used code paths, more support training, and customers who love to get creative with little-used (or little-thought-through) features.

The optimization trap
Is there a similar trap developing in cloud services? An optimization trap?

Take Cisco routers, the poster child for the optimization trap. They kept adding features that customers requested. Customers liked that. So did Cisco, because lock-in.

Fast forward a decade and there’s 10 million lines of router code. The code is buggy. Maintenance and support costs are high. But Cisco is very profitable and has a lock on the market.

Which worked until the hyperscale guys realized that network costs were the fastest growing part of their infrastructure costs. Not a huge part of their costs, but PhDs can do the math and see higher bad, lower good.

Hyperscale data centers are commoditized. They don’t need, or want, thousands of corner-case features. In fact, they know exactly what they need, and build only that.

Way cheaper. And faster. More robust too. Enterprise IT guys, under pressure from the cloud’s much lower cost model, piped up: “hey, share the goodness!”

Cisco’s old business model is hurting.

Not only networks
Similar story in enterprise storage. Customers have two overriding requirements: availability and performance. RAID promised to provide both and, after a few years, did.

Enter lily-gilding, driving up costs and practically mandating Fibre Channel SANs. Today? Thanks to SSDs and cheapish RAM, a healthy share of the enterprise storage market has moved to the cloud and another, larger, piece has moved back into servers.

The StorageMojo take
The trap doesn’t occur in all, or even most, markets. Commodities are still commodities. Consumer goods improvements raise a bar, not an umbrella.

But the trap is endemic to the enterprise IT market. Key elements include:

  • A single dominant paradigm.
  • Customers too busy to analyze needs and trade-offs.
  • High margin legacy vendors where disruption offers plenty of margin dollars.
  • High operation costs.
  • And key: outside players with the scale and resources to go their own way.

AWS is in danger of falling into the trap. Great margins. Dominant market share. Sticky data movement costs. More costly than those penny per month prices seem.

AWS advertises the number of new features – “services” – they add each year. Today that’s still a feature. But tomorrow?

If you attend the AWS re:Invent conference this month keep your antennae attuned for a giant reaching the top of a mountain. And let me know how close you think he is.

Courteous comments welcome, of course.


Frisky Gen-Z’s to battle boomer Intel

by Robin Harris on Monday, 24 October, 2016

The internalization of storage is spawning another war – this time in memory interconnects. From Anandtech:

This week sees the launch of the Gen-Z Consortium, featuring names such as ARM, Huawei, IBM, Mellanox, Micron, Samsung, SK Hynix and Xilinx, with the purpose of designing a new memory semantic fabric that focuses on the utilization of ‘on-package and storage class memoryʼ (HMC, phase-change, 3D XPoint etc) at the hardware level.

Let’s unpack that.
A bunch of companies, not including Intel or Microsoft, want to build a standard for using memory as storage. The semantics refer to the limited instruction set that the new bus will use – reads, writes, load, store, put/get – for block-based storage class memory.

Further, they plan to scale the interconnect from nodes – inside the server – to racks.

The consortium has lofty goals:

  • Memory media independence. Enable any type and mix of DRAM and non-volatile memory (NVM) to be directly accessed by applications or through block-semantic communications.
  • High-bandwidth, low-latency. Efficient, memory-semantic protocol supporting a range of signaling rates and link widths that scale from 10s to 100s of GB/s of bandwidth.
  • Multipath. High signaling rates (up to 112 GT/s), and traffic segregation so services and applications may be isolated.
  • Scalability. From point-to-point to rack-scale, switch-based topologies.
  • Oh, yeah. Cheap too, uses existing form factors and cables.

This all sounds great. But here’s the kicker:

Gen-Z supports a wide variety of component types including processors, memory modules, FPGAs, GPU / GPGPU, DSP, I/O, accelerators, NICs, custom ASICs, and many more.

Processors? And Intel isn’t on board? Micron, of course, is Intel’s partner for 3D XPoint and, if you look at the ownership of their JV, the one in charge. But I think Micron wants to keep Intel happy.

The StorageMojo take
Hurrah for Gen-Z! I like what they’re trying to do, even if the Gen-Z everywhere and anywhere over anything message threatens to fragment the effort into a dozen or more “standard” but incompatible implementations.

Give people a lot of options and they’ll take ’em. Even though they will rarely all choose the same ones.

The larger issue is that the consortium members don’t want to surrender storage class memory to Intel’s tender mercies. And that I support too.

But on-chip interfaces to SCM will be way more performant than off-chip. The Gen-Z Gang of Eight faces an uphill fight, at least in x86 land. OTOH, Intel may face anti-trust scrutiny if they are too aggressive in locking out competing technologies.

I’ll make some popcorn. This will be fun to watch.

Courteous comments welcome, of course.


ClearSky: object storage at enterprise block speed

by Robin Harris on Monday, 17 October, 2016

Can object storage ever be as fast as block storage? It turns out the answer is yes.

And we already know how to do it.

I was speaking to the CTO of ClearSky Data, Laz Vekiarides, about their block storage system for enterprise applications. They offer

. . . a Global Storage Network that manages the entire enterprise data lifecycle, as a fully-managed service.

ClearSky is a cloud-based service that makes some usual and unusual promises:

  • Eliminate storage silos.
  • Pay as you grow – and populate thin-provisioned volumes.
  • On-premise performance + cloud scale.
  • Multi-site data access without replication.
  • Fully managed, 24×7 support.
  • Guaranteed 99.999% uptime.
  • Consumption-based pricing.
  • Substantially lower cost than legacy arrays AND AWS EBS.

It was the last promise that got me really interested. How do you provide cloud-based block storage at a substantially lower price than Amazon offers it, using Amazon’s infrastructure, while making it fast enough for transactional workloads?

The answer we already had
Cloud storage: high latency and limited bandwidth. Sounds like a disk, doesn’t it?

Let’s see, what did we do to make disk performance work? Oh, yes, caching.

Which is, essentially, what ClearSky does: they put a big, fast, scalable, SSD cache in front of cloud storage to provide Big Iron array performance, without Big Iron’s insupportable costs. The 2U rackmount caches – up to 32TB each – are highly redundant, clusterable for growth, and connect to a metro Ethernet POP over a private network.

Of course, there’s much more to what ClearSky does than this. Their Smart Tiering keeps track of hot, warm, and cold data. They have special POPs – in Boston, New York, northern Virginia, and Las Vegas for now – that dramatically reduce the latency that their edge appliances have to deal with.

They simplify storage management as well. Customers only have to manage LUNs and such, not the physical devices and interconnects. DR is built-in, if you have two or more IT locations. And more.

Bottom line: ClearSky offers a replacement for a VMAX array for a fraction of the cost.

But here’s the cool thing: ClearSky stores your data in the cloud as objects, not blocks. That’s how they can offer 4k block storage for a fraction of the cost of Amazon’s Elastic Block Storage.

Blocks into objects
So how do you serve blocks and store objects? While in theory there’s no reason why objects couldn’t be 4K each, the overhead required to keep track of them would overwhelm the system with metadata. Something has to give.

The local edge cache stores blocks. But when the blocks are moved into the POP object store, they are concatenated into 4MB objects. When a block is accessed, the system first goes to the 4MB object – which keeps track of its own 1,024 4K blocks – and extracts the block.

The POPs are equipped with SSDs to keep track of the metadata, so the lookups are fast and, if the data is warm (cached), the block read is too. Since the metro POP latency is 1-2ms, even the occasional block read from the POP is as good as traditional arrays.
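The packing scheme can be sketched in a few lines of Python. This is a toy illustration of the idea described above, not ClearSky’s actual code – the class and method names are my own invention:

```python
# Toy sketch of block-to-object packing: 4 KB blocks are concatenated
# into 4 MB objects, and a metadata index maps each logical block
# address to its object and slot, so a block read is one metadata
# lookup plus one extraction from the owning object.
# Hypothetical illustration -- not ClearSky's actual implementation.

BLOCK_SIZE = 4 * 1024                          # 4 KB logical blocks
OBJECT_SIZE = 4 * 1024 * 1024                  # 4 MB cloud objects
BLOCKS_PER_OBJECT = OBJECT_SIZE // BLOCK_SIZE  # 1,024 blocks per object

class ToyObjectStore:
    """Serves 4 KB blocks, stores 4 MB objects."""

    def __init__(self):
        self.objects = {}   # object_id -> 4 MB of concatenated blocks
        self.index = {}     # logical block address -> (object_id, slot)
        self._staged = []   # blocks waiting to be packed into an object
        self._next_id = 0

    def write_block(self, lba, data):
        assert len(data) == BLOCK_SIZE
        self._staged.append((lba, data))
        if len(self._staged) == BLOCKS_PER_OBJECT:
            self._flush()

    def _flush(self):
        """Concatenate staged blocks into one object; record metadata."""
        oid = self._next_id
        self._next_id += 1
        self.objects[oid] = b"".join(d for _, d in self._staged)
        for slot, (lba, _) in enumerate(self._staged):
            self.index[lba] = (oid, slot)
        self._staged = []

    def read_block(self, lba):
        """Metadata lookup first, then extract the block from its object."""
        oid, slot = self.index[lba]
        offset = slot * BLOCK_SIZE
        return self.objects[oid][offset:offset + BLOCK_SIZE]
```

The payoff of the 4MB grouping is metadata volume: one index entry per object’s worth of blocks at the object layer, instead of a thousand-plus entries if every 4K block were its own object.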

The StorageMojo take
ClearSky should be on anyone’s shortlist for fast block storage with cloud pricing. I’ve only scratched the surface of what they’ve got. Their security story – end-to-end AES-256 in transit and at rest, with keys stored locally with TPM key management – is also impressive.

But making object storage really fast is a key problem for the coming decade. It looks like ClearSky has figured out how to do it.

Courteous comments welcome, of course. Updated to correct ClearSky’s preferred spelling.


Everspin’s MRAM IPO

by Robin Harris on Monday, 10 October, 2016

Everspin has filed for their IPO. They’re looking to raise $40 million from the public market. They’ve been shipping product for over 10 years, so this is a real company, not a dream and a slide deck.

Why MRAM?
Everspin’s Magnetic RAM has a number of advantages over flash and DRAM:

  • DDR RAM write latency – much faster than 3D XPoint and much, much faster than flash.
  • Endurance that is much higher than flash and 3D XPoint.
  • Byte addressable, like DRAM and unlike flash.
  • Can replace DRAM on DIMMs – no need for complex controllers for wear leveling and garbage collection.
  • The latest MRAM gen is built on a simple 3-layer process, which means that it could, at high volumes, be cheaper than DRAM.

The logical question then is: why hasn’t MRAM killed DRAM? A couple of key reasons:

  • Cost. MRAM could be as cheap as DRAM, but only after coming down the learning curve, where DRAM has a roughly 50-year head start.
  • Density. Everspin will be sampling a 1 Gbit chip later this year, far behind the 8 Gbit DRAM chips commonly available today.

But even with a cost-per-bit 10x that of DRAM, MRAM has a defensible niche as it ups production and lowers cost. It replaces DRAM-based NVDIMMs because it doesn’t need the batteries and assorted packaging cruft those require. It also works nicely as a buffer, where speed, endurance and non-volatility are vital, and its cost is buried in a larger product.

The filing
Everspin’s S1 shows that the company is doing about $25 million a year, and spending about $10m a year on R&D. Despite gross margins in the 50% range, their investments in growing the business have them on track to lose about $18m this year.

The StorageMojo take
Given Wall Street’s warm welcome for Nutanix I expect that Everspin will have a successful IPO. It may finally be sinking in with the investment community that storage is the most valuable part of our IT infrastructure.

The larger picture is that the IPO demonstrates that NVRAM technologies are real, that 3D XPoint isn’t the only contestant, and that the pace of change in the storage market is still accelerating. These are all Good Things.

StorageMojo wishes Everspin the best of luck on their IPO.

Courteous comments welcome, of course.


Is 3D XPoint in trouble?

by Robin Harris on Thursday, 6 October, 2016

The Register’s Chris Mellor and SemiAccurate’s Charlie Demerjian are throwing shade on Intel’s claims for 3D XPoint. While it’s great fun to tweak the giants of tech – as I often do – I think they are likely wrong in their interpretation of the data backing their arguments.

As Mr. Mellor wrote:

In contrast to the wildly optimistic Intel “1,000 times faster than NAND” claims when XPoint was launched, the Micron pitch presented a 10-times improvement over NAND in terms of IOPS and latency, and four times more memory footprint than DRAM per CPU.

Mr. Demerjian goes further in his comments:

Latency missed by 100x, yes one hundred times, on their claim of 1000x faster, 10x is now promised and backed up by tests. More troubling is endurance, probably the main selling point of this technology over NAND. Again the claim was a 1000x improvement, Intel delivered 1/333rd of that with 3x the endurance.

Chips vs systems
Intel and Micron are chip folks. I’m confident that at the chip/media level, the results that Intel reported are close – despite marketing rounding-up – but when talking about SSDs, Optane or QuantX, the subject is a system, not media. A system with a CPU, lots of software, buffers, caches, bandwidth and, finally, media.

Example
Back when RAID arrays were a radical new technology in the early 90s, the simple fact that you could gang together a half dozen disks and get a 5x performance improvement was a big deal. A little later caching came on the scene, and made RAID even more compelling.

But did adding in a layer of RAM – which was, let’s say, a million times faster – make RAID arrays a million times faster? Of course not. The difference between write-back and write-through caching – and the associated engineering/test problems – played a major role as well.

Nor did replacing hard drives with SSDs make arrays as fast as raw NAND read numbers would suggest. Systems consist of cooperating parts, and making one part a thousand times faster doesn’t make all the other parts go faster too.
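Back-of-the-envelope arithmetic makes the point. These are my illustrative numbers, not measurements from any real array, but the shape of the result is what matters:

```python
# Why a million-times-faster cache doesn't yield a million-times-faster
# system: average latency is dominated by the misses that still go to
# the slow medium. Illustrative numbers only -- not vendor data.

DISK_MS = 10.0      # ~10 ms per disk access
RAM_MS = 0.00001    # RAM roughly a million times faster

def avg_latency_ms(hit_rate):
    """Average access latency with a RAM cache in front of disk."""
    return hit_rate * RAM_MS + (1 - hit_rate) * DISK_MS

for hit_rate in (0.90, 0.99):
    speedup = DISK_MS / avg_latency_ms(hit_rate)
    print(f"{hit_rate:.0%} hit rate -> {speedup:.0f}x speedup")
# 90% hit rate -> 10x speedup
# 99% hit rate -> 100x speedup
```

Even a 99% hit rate buys only a 100x system-level speedup: the residual misses, not the raw media speed, set the floor. That’s the same dynamic behind the gap between XPoint’s media-level and system-level numbers.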

Mr. Mellor, of course, understands this, remarking:

We might say that the original XPoint claims referred to raw media comparisons and not system-level performance, a point not made clear in the original XPoint performance, density and endurance claims.

The StorageMojo take
Intel is not a marketing company. They are an engineering company selling to engineers and, as such, they often put a foot wrong when trying to reach the general tech public.

But that doesn’t explain why Intel/Micron rushed the 3D XPoint announcement: to freeze out competitors? To help Micron fight off an acquisition? Panic at some perceived threat, such as Nantero? They might have consulted their CPU colleagues on the problems that come from hyping, say, clock speeds, as in the NetBurst days.

Whatever drove the announcement, they did themselves no favors by focusing on cell-level performance. But unlike those who read the announcement as a promise to deliver systems with those performance improvements, I’m pleased that they got into the market with 3D XPoint, because they are driving the industry faster than any startup could.

And that’s good for the industry and for computer systems buyers and users everywhere. The problems that I can see, such as schedule slips, appear normal for advanced technology.

Courteous comments welcome, of course.


Nutanix IPO: the big score

October 5, 2016

Nutanix – NTNX – has started off with a bang: opening at $16 a share and quickly rising to almost $30. It’s trading at $36 as I write. Now for the hard part. Everyone is no doubt counting how much they’ve made on that spectacular beginning. But there’s a six month lock-up, meaning insiders have […]


Dell vets: buff up your resumes this weekend

September 9, 2016

Now that Dell has completed the EMC acquisition, you are in for a rude awakening. While Dell may own EMC, EMC owns you. Richard Egan, one of the founders of EMC, fostered an exceptionally aggressive sales culture. The company liked to hire guys from blue collar families who’d played football in college, and then set […]


Artisanal science doesn’t scale

September 8, 2016

Big data will overwhelm artisanal science. That’s what I conclude from a recent paper that lays out the stark statistics: Science is a growing system, exhibiting 4% annual growth in publications and 1.8% annual growth in the number of references per publication. Together these growth factors correspond to a 12-year doubling period in the total […]


Nantero NRAM: ARM’d and dangerous

September 7, 2016

Intel’s 3D XPoint non-volatile RAM has sucked up most of the attention in the NVRAM space, but Nantero’s NRAM has taken a giant step forward. So far forward that Intel may get ARM’d again if they aren’t careful. NRAM? Nantero is the 15 year old startup pioneering carbon nano-tube RAM, or NRAM. 15 years is […]


Notes on VMworld 2016

August 31, 2016

Spent the day on the show floor at VMworld 2016 in sunny Las Vegas. Saw some interesting things. Panzura now offers byte-range locking on their global collaboration platform. They’ve been having great success in the Autodesk Revit market. M&E seems like a natural as well. This is hard to do and few have done it […]


VMworld next week

August 26, 2016

The StorageMojo crack analyst team is busy polishing their cowboy boots and ironing their jeans to get respectable (why now?) for next week’s VMworld in Las Vegas. Las Vegas is a short – by Western standards – 4 to 5 hour drive from the high pastures of northern Arizona, and a favorite place for the […]


Excel may be dangerous to your health – and your nation

August 26, 2016

Over on ZDNet I’ve been doing a series looking at the issues we face incorporating Big Data into our digital civilization (see When Big Data is bad data, Lying scientists and the lying lies they tell, and Humans are the weak link in Big Data). I’m not done yet, but I wanted to share a […]


NetApp’s surprising Q1

August 23, 2016

NetApp’s Q1 was a happy surprise for Wall Street: earnings blew past estimates and the stock spiked over 16%. But the quarterly 8-K report was more downbeat. Product revenues Net revenue was down $41 million year over year. Products the company calls Strategic – presumably hybrid cloud and flash, but not defined in the 8-K […]


World’s largest manufacturer of vinyl records

August 22, 2016

A story from the byways of data storage. Vinyl audio records have been making something of a comeback. Fans prefer the sound, and DJs like to “scratch” them, which is pretty cool the first hundred times you hear it. A series of pieces in the UK paper the Guardian describes the current state of vinyl, […]
