Frisky Gen-Zs to battle boomer Intel

by Robin Harris on Monday, 24 October, 2016

The internalization of storage is spawning another war – this time in memory interconnects. From AnandTech:

This week sees the launch of the Gen-Z Consortium, featuring names such as ARM, Huawei, IBM, Mellanox, Micron, Samsung, SK Hynix and Xilinx, with the purpose of designing a new memory semantic fabric that focuses on the utilization of ‘on-package and storage class memory’ (HMC, phase-change, 3D XPoint etc) at the hardware level.

Let’s unpack that.
A bunch of companies – notably not including Intel or Microsoft – want to build a standard for using memory as storage. The semantics refer to the limited instruction set the new bus will use – read, write, load, store, put/get – for block-based storage class memory.
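To make “memory semantics” concrete, here is a toy sketch – invented names and sizes, not the Gen-Z protocol – contrasting block access, where data moves in fixed-size sectors, with memory access, where loads and stores work at byte granularity:

```python
# Illustrative only: block semantics vs. memory semantics.
# Class names and sizes are invented, not from the Gen-Z spec.

class BlockDevice:
    """Block-semantic access: all transfers are whole fixed-size sectors."""
    SECTOR = 4096

    def __init__(self, sectors):
        self.data = bytearray(sectors * self.SECTOR)

    def read_sector(self, lba):
        off = lba * self.SECTOR
        return bytes(self.data[off:off + self.SECTOR])

    def write_sector(self, lba, buf):
        assert len(buf) == self.SECTOR, "must write a full sector"
        off = lba * self.SECTOR
        self.data[off:off + self.SECTOR] = buf


class MemorySemanticFabric:
    """Memory-semantic access: loads and stores at byte granularity."""

    def __init__(self, size):
        self.data = bytearray(size)

    def load(self, addr, length=8):
        return bytes(self.data[addr:addr + length])

    def store(self, addr, buf):
        self.data[addr:addr + len(buf)] = buf
```

To touch one byte, the block device must move a whole 4k sector through the I/O stack; the fabric just issues an 8-byte load. That is the appeal of putting storage class memory behind load/store semantics.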

Further, they plan to scale the interconnect from nodes – inside the server – to racks.

The consortium has lofty goals:

  • Memory media independence. Enable any type and mix of DRAM and non-volatile memory (NVM) to be directly accessed by applications or through block-semantic communications.
  • High-bandwidth, low-latency. Efficient, memory-semantic protocol supporting a range of signaling rates and link widths that scale from 10s to 100s of GB/s of bandwidth.
  • Multipath. High signaling rates (up to 112 GT/s), and traffic segregation so services and applications may be isolated.
  • Scalability. From point-to-point to rack-scale, switch-based topologies.
  • Oh, yeah. Cheap too, uses existing form factors and cables.

This all sounds great. But here’s the kicker:

Gen-Z supports a wide variety of component types including processors, memory modules, FPGAs, GPU / GPGPU, DSP, I/O, accelerators, NICs, custom ASICs, and many more.

Processors? And Intel isn’t on board? Micron, of course, is Intel’s partner for 3D XPoint and – if you look at the ownership of their joint venture – the one in charge. But I think they want to keep Intel happy.

The StorageMojo take
Hurrah for Gen-Z! I like what they’re trying to do, even if the Gen-Z “everywhere and anywhere, over anything” message threatens to fragment the effort into a dozen or more “standard” but incompatible implementations.

Give people a lot of options and they’ll take ’em. Even though they will rarely all choose the same ones.

The larger issue is that the consortium members don’t want to surrender storage class memory to Intel’s tender mercies. And that I support too.

But on-chip interfaces to SCM will be way more performant than off-chip. The Gen-Z Gang of Eight faces an uphill fight, at least in x86 land. OTOH, Intel may face anti-trust scrutiny if they are too aggressive in locking out competing technologies.

I’ll make some popcorn. This will be fun to watch.

Courteous comments welcome, of course.


ClearSky: object storage at enterprise block speed

by Robin Harris on Monday, 17 October, 2016

Can object storage ever be as fast as block storage? It turns out the answer is yes.

And we already know how to do it.

I was speaking to the CTO of ClearSky Data, Laz Vekiarides, about their block storage system for enterprise applications. They offer

. . . a Global Storage Network that manages the entire enterprise data lifecycle, as a fully-managed service.

ClearSky is a cloud-based service that makes some usual and unusual promises:

  • Eliminate storage silos.
  • Pay as you grow – and populate thin-provisioned volumes.
  • On-premise performance + cloud scale.
  • Multi-site data access without replication.
  • Fully managed, 24×7 support.
  • Guaranteed 99.999% uptime.
  • Consumption-based pricing.
  • Substantially lower cost than legacy arrays AND AWS EBS.

It was the last promise that got me really interested. How do you provide cloud-based block storage at a substantially lower price than Amazon offers it, using Amazon’s infrastructure, while making it fast enough for transactional workloads?

The answer we already had
Cloud storage: high latency and limited bandwidth. Sounds like a disk, doesn’t it?

Let’s see, what did we do to make disk performance work? Oh, yes, caching.

Which is, essentially, what ClearSky does: they put a big, fast, scalable, SSD cache in front of cloud storage to provide Big Iron array performance, without Big Iron’s insupportable costs. The 2U rackmount caches – up to 32TB each – are highly redundant, clusterable for growth, and connect to a metro Ethernet POP over a private network.
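The pattern is the classic read-through cache. A minimal sketch of the general idea – invented names, not ClearSky’s implementation – looks like this:

```python
# Read-through cache sketch: a fast local tier in front of slow cloud
# storage. A dict stands in for each tier; names are invented for
# illustration, not ClearSky's actual design.

class CachedStore:
    def __init__(self, backend, capacity):
        self.backend = backend      # slow, cheap cloud object store
        self.cache = {}             # fast local SSD cache (stand-in)
        self.capacity = capacity

    def read(self, key):
        if key in self.cache:       # cache hit: local-SSD latency
            return self.cache[key]
        value = self.backend[key]   # cache miss: pay cloud latency once
        if len(self.cache) >= self.capacity:
            # Crude eviction; a real cache would track heat (LRU, etc.)
            self.cache.pop(next(iter(self.cache)))
        self.cache[key] = value
        return value
```

As long as the working set fits in the 32TB edge cache, reads see SSD latency, and the cloud’s latency only shows up on misses.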

Of course, there’s much more to what ClearSky does than this. Their Smart Tiering keeps track of hot, warm, and cold data. They have special POPs – in Boston, New York, northern Virginia, and Las Vegas for now – that dramatically reduce the latency that their edge appliances have to deal with.

They simplify storage management as well. Customers only have to manage LUNs and such, not the physical devices and interconnects. DR is built-in, if you have two or more IT locations. And more.

Bottom line: ClearSky offers a replacement for a VMAX array for a fraction of the cost.

But here’s the cool thing: ClearSky stores your data in the cloud as objects, not blocks. That’s how they can offer 4k block storage for a fraction of the cost of Amazon’s Elastic Block Storage.

Blocks into objects
So how do you serve blocks and store objects? While in theory there’s no reason why objects couldn’t be 4k each, the overhead required to keep track of them would overwhelm the system with detailed metadata. Something has to give.

The local edge cache stores blocks. But when the blocks are moved into the POP object store, they are concatenated into 4MB objects. When a block is accessed, the system first goes to the 4MB object, which keeps track of its own 1,024 4k blocks, and extracts the block.

The POPs are equipped with SSDs to keep track of the metadata, so the lookups are fast and, if the data is warm (cached), the block read is too. Since the metro POP latency is 1-2ms, even the occasional block read from the POP is as good as traditional arrays.
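The block-to-object arithmetic is simple. A sketch in my own notation – not ClearSky’s actual metadata scheme:

```python
# Illustrative math only: how 4k blocks map into 4MB objects.
BLOCK = 4 * 1024                      # 4k block, as served to the host
OBJECT = 4 * 1024 * 1024              # 4MB object, as stored in the cloud
BLOCKS_PER_OBJECT = OBJECT // BLOCK   # 1,024 blocks per object

def locate(lba):
    """Map a logical block address to (object id, byte offset in object)."""
    obj_id, index = divmod(lba, BLOCKS_PER_OBJECT)
    return obj_id, index * BLOCK
```

One object’s worth of metadata now covers 1,024 blocks, so the metadata store shrinks by three orders of magnitude versus tracking every 4k block as its own object.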

The StorageMojo take
ClearSky should be on anyone’s shortlist for fast block storage with cloud pricing. I’ve only scratched the surface of what they’ve got. Their security story – end-to-end AES-256 in transit and at rest, with keys stored locally with TPM key management – is also impressive.

But making object storage really fast is a key problem for the coming decade. It looks like ClearSky has figured out how to do it.

Courteous comments welcome, of course. Updated to correct ClearSky’s preferred spelling.


Everspin’s MRAM IPO

by Robin Harris on Monday, 10 October, 2016

Everspin has filed for their IPO. They’re looking to raise $40 million from the public market. They’ve been shipping product for over 10 years, so this is a real company, not a dream and a slide deck.

Everspin’s Magnetic RAM has a number of advantages over flash and DRAM:

  • DDR RAM write latency – much faster than 3D XPoint and much, much faster than flash.
  • Endurance that is much higher than flash and 3D XPoint.
  • Byte addressable, like DRAM and unlike flash.
  • Can replace DRAM on DIMMs – no need for complex controllers for wear leveling and garbage collection.
  • The latest MRAM gen is built on a simple 3-layer process, which means that it could, at high volumes, be cheaper than DRAM.

The logical question then is: why hasn’t MRAM killed DRAM? A couple of key reasons:

  • Cost. MRAM could be as cheap as DRAM, but only after climbing down the learning curve, where DRAM has a 50-year head start.
  • Density. Everspin will be sampling a 1 Gbit chip later this year, far behind the 8 Gbit DRAM chips commonly available today.
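The learning-curve point can be made concrete with Wright’s-law arithmetic – the rates and costs here are invented for illustration, not Everspin’s or the DRAM industry’s actual numbers:

```python
# Wright's-law sketch: unit cost falls by a fixed fraction with each
# doubling of cumulative production volume. All numbers are invented.
import math

def unit_cost(initial_cost, cumulative_units, learning_rate=0.2):
    """Cost per unit after cumulative_units produced, starting from 1."""
    doublings = math.log2(cumulative_units)
    return initial_cost * (1 - learning_rate) ** doublings
```

With a 20% learning rate, cost drops to 80% of the starting point after one doubling and 64% after two – which is why a technology fifty years behind on cumulative volume starts with a painful cost handicap, whatever its process simplicity.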

But even with a cost-per-bit 10x that of DRAM, MRAM has a defensible niche as it ups production and lowers cost. It replaces DRAM-based NVDIMMs because it doesn’t need the batteries and assorted packaging cruft those require. It also works nicely as a buffer, where speed, endurance and non-volatility are vital, and its cost is buried in a larger product.

The filing
Everspin’s S-1 shows that the company is doing about $25 million a year, and spending about $10m a year on R&D. Despite gross margins in the 50% range, their investments in growing the business have them on track to lose about $18m this year.

The StorageMojo take
Given Wall Street’s warm welcome for Nutanix I expect that Everspin will have a successful IPO. It may finally be sinking in with the investment community that storage is the most valuable part of our IT infrastructure.

The larger picture is that the IPO demonstrates that NVRAM technologies are real, that 3D XPoint isn’t the only contestant, and that the pace of change in the storage market is still accelerating. These are all Good Things.

StorageMojo wishes Everspin the best of luck on their IPO.

Courteous comments welcome, of course.


Is 3D XPoint in trouble?

by Robin Harris on Thursday, 6 October, 2016

The Register’s Chris Mellor and SemiAccurate’s Charlie Demerjian are throwing shade on Intel’s claims for 3D XPoint. While it’s great fun to tweak the giants of tech – as I often do – I think they are likely wrong in their interpretation of the data backing their arguments.

As Mr. Mellor wrote:

In contrast to the wildly optimistic Intel “1,000 times faster than NAND” claims when XPoint was launched, the Micron pitch presented a 10-times improvement over NAND in terms of IOPS and latency, and four times more memory footprint than DRAM per CPU.

Mr. Demerjian goes further in his comments:

Latency missed by 100x, yes one hundred times, on their claim of 1000x faster, 10x is now promised and backed up by tests. More troubling is endurance, probably the main selling point of this technology over NAND. Again the claim was a 1000x improvement, Intel delivered 1/333rd of that with 3x the endurance.

Chips vs systems
Intel and Micron are chip folks. I’m confident that at the chip/media level, the results that Intel reported are close – despite marketing rounding-up – but when talking about SSDs, Optane or QuantX, the subject is a system, not media. A system with a CPU, lots of software, buffers, caches, bandwidth and, finally, media.

Back when RAID arrays were a radical new technology in the early 90s, the simple fact that you could gang together a half dozen disks and get a 5x performance improvement was a big deal. A little later caching came on the scene, and made RAID even more compelling.

But did adding in a layer of RAM – which was, let’s say, a million times faster – make RAID arrays a million times faster? Of course not. The difference between write-back and write-through caching – and the associated engineering/test problems – played a major role as well.

Nor did replacing hard drives with SSDs make arrays as fast as raw NAND read numbers would suggest. Systems consist of cooperating parts, and making one part a thousand times faster doesn’t make all the other parts go faster too.
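Amdahl’s law makes the point concrete. A back-of-the-envelope sketch – every number below is invented for illustration, not a measured Optane result:

```python
# Amdahl's-law sketch: speeding up only the media does not speed up the
# whole I/O path proportionally. All numbers are invented for illustration.

def system_latency(software_us, media_us, media_speedup):
    """Total I/O latency when only the media portion gets faster."""
    return software_us + media_us / media_speedup

# Suppose an I/O spends 20 µs in the software stack and 80 µs in the media.
base = system_latency(20, 80, 1)        # 100 µs end to end
faster = system_latency(20, 80, 1000)   # 20.08 µs: software now dominates
speedup = base / faster                 # roughly 5x at the system level
```

A media layer that is 1,000x faster yields only about a 5x system-level gain in this example, because the software stack becomes the bottleneck – which is exactly the gap between Intel’s chip-level claims and Micron’s SSD-level numbers.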

Mr. Mellor, of course, understands this, remarking:

We might say that the original XPoint claims referred to raw media comparisons and not system-level performance, a point not made clear in the original XPoint performance, density and endurance claims.

The StorageMojo take
Intel is not a marketing company. They are an engineering company selling to engineers and, as such, they often put a foot wrong when trying to reach the general tech public.

But that doesn’t explain why Intel/Micron rushed the 3D XPoint announcement. To freeze out competitors? To help Micron fight off an acquisition? Panic at some perceived threat, such as Nantero? They might have consulted their CPU colleagues on the problems that come from hyping, say, clock speeds, as in the NetBurst days.

Whatever drove the announcement, they did themselves no favors by focusing on cell-level performance. But unlike those who read the announcement as a promise to deliver systems with those performance improvements, I’m pleased that they got into the market with 3D XPoint, because they are driving the industry faster than any startup could.

And that’s good for the industry and for computer systems buyers and users everywhere. The problems that I can see, such as schedule slips, appear normal for advanced technology.

Courteous comments welcome, of course.


Nutanix IPO: the big score

by Robin Harris on Wednesday, 5 October, 2016

Nutanix – NTNX – has started off with a bang: opening at $16 a share and quickly rising to almost $30. It’s trading at $36 as I write.

Now for the hard part
Everyone is no doubt counting how much they’ve made on that spectacular beginning. But there’s a six-month lock-up, meaning insiders have to hold their shares for at least six months.

And a lot can happen in six months.

All it takes is a single quarterly miss and Wall Street will savage the stock. There goes the Ferrari!

So buckle down and don’t screw up!

The StorageMojo take
Nutanix was co-founded by one of the Google File System engineers.
As I wrote about GFS over 10 years ago:

Looking at the whole gestalt, even assuming GFS were for sale, it is a niche product and would not be very successful on the open market.

As a model for what can be done however, it is invaluable. The industry has strived for the last 20 years to add availability and scalability to an increasingly untenable storage model of blocks and volumes, through building ever-costlier “bulletproof” devices.

GFS breaks that model and shows us what can be done when the entire storage paradigm is rethought. Build the availability around the devices, not in them, treat the storage infrastructure as a single system, not a collection of parts, extend the file system paradigm to include much of what we now consider storage management, including virtualization, continuous data protection, load balancing and capacity management.

GFS is not the future. But it shows us what the future can be.

In Nutanix, the future has arrived.

Courteous comments welcome, of course.


Dell vets: buff up your resumes this weekend

by Robin Harris on Friday, 9 September, 2016

Now that Dell has completed the EMC acquisition, you are in for a rude awakening. While Dell may own EMC, EMC owns you.

Richard Egan, one of the founders of EMC, fostered an exceptionally aggressive sales culture. The company liked to hire guys from blue-collar families who’d played football in college, and then put them in a competitive culture where, if successful, they could make $500,000 a year.

All company travel was done on personal time, not business hours. Reps weren’t allowed to get too comfortable with their territories and accounts: after a year or two of success, your budget would be upped and/or you’d get some new accounts. And miss your number for a couple of quarters? You’d be MIA.

The pressure was intense. When I was at Sun, a customer told us that he’d had to call security to remove an EMC rep who was screaming at him for buying Sun instead of the EMC kit she’d been counting on closing.

A DELLicate transition
As EMC has acquired other companies with different cultures, and as cost pressures have grown, EMC CEO Joe Tucci tamped down the macho go-for-the-throat culture of Egan’s EMC. But make no mistake, EMC is still an aggressive sales machine.

Forget the who bought who details. It’s not uncommon for the acquired company personnel to elbow out the acquiring company’s incumbents. After all, if the acquiring company had the skills they wanted in-house, why go outside?

Dell’s storage initiatives, while well-intentioned, suffered from two problems, one normal and one not. The normal problem is when a large company acquires a small one, the small company gets engulfed in meeting a thousand new requirements and process hoops, while at the same time trying to get the large company sales force to start aggressively flogging their gear.

The abnormal problem: Mr. Dell never knew what he didn’t know about storage, so he never put the emphasis on changing Dell’s culture to make it happen. Since storage sales are harder than server sales, the Dell sales force has never been interested, while Dell’s storage marketing team didn’t have Mr. Dell’s support to overcome sales inertia. For server sales teams, the forecast calls for pain.

But the carnage won’t stop there
EMC has a deep bench, both in sales and operations, as well as way more technology than Dell has ever seen. When there’s an internal hire to be made, the EMC candidate is likely to have more relevant experience.

That’s not all. Since Dell has wildly overpaid for EMC, and the global economy is weak, the way to higher profits is through cost cuts. Headcount will get a serious cut over the next 24 months.

The StorageMojo take
I’d be a little more optimistic if Mr. Dell had turned the reins over to Joe Tucci, who’s one of the smartest CEOs in tech. Not that that would help Dell vets, but it would get the transition running smoother, sooner.

Instead we’ll likely get Mr. Dell’s usual flailing about and multiple misfires, especially as the enormity of this acquisition sinks in.

If I were a Dell vet, I’d rather jump than be pushed.

Courteous comments welcome, of course.


Artisanal science doesn’t scale

September 8, 2016

Big data will overwhelm artisanal science. That’s what I conclude from a recent paper that lays out the stark statistics: Science is a growing system, exhibiting 4% annual growth in publications and 1.8% annual growth in the number of references per publication. Together these growth factors correspond to a 12-year doubling period in the total […]


Nantero NRAM: ARM’d and dangerous

September 7, 2016

Intel’s 3D XPoint non-volatile RAM has sucked up most of the attention in the NVRAM space, but Nantero’s NRAM has taken a giant step forward. So far forward that Intel may get ARM’d again if they aren’t careful. NRAM? Nantero is the 15 year old startup pioneering carbon nano-tube RAM, or NRAM. 15 years is […]


Notes on VMworld 2016

August 31, 2016

Spent the day on the show floor at VMworld 2016 in sunny Las Vegas. Saw some interesting things. Panzura now offers byte-range locking on their global collaboration platform. They’ve been having great success in the Autodesk Revit market. M&E seems like a natural as well. This is hard to do and few have done it […]


VMworld next week

August 26, 2016

The StorageMojo crack analyst team is busy polishing their cowboy boots and ironing their jeans to get respectable (why now?) for next week’s VMworld in Las Vegas. Las Vegas is a short – by Western standards – 4 to 5 hour drive from the high pastures of northern Arizona, and a favorite place for the […]


Excel may be dangerous to your health – and your nation

August 26, 2016

Over on ZDNet I’ve been doing a series looking at the issues we face incorporating Big Data into our digital civilization (see When Big Data is bad data, Lying scientists and the lying lies they tell, and Humans are the weak link in Big Data). I’m not done yet, but I wanted to share a […]


NetApp’s surprising Q1

August 23, 2016

NetApp’s Q1 was a happy surprise for Wall Street: earnings blew past estimates and the stock spiked over 16%. But the quarterly 8k report was more downbeat. Product revenues Net revenue was down $41 million year over year. Products the company calls Strategic – presumably hybrid cloud and flash, but not defined in the 8k […]


World’s largest manufacturer of vinyl records

August 22, 2016

A story from the byways of data storage. Vinyl audio records have been making something of a comeback. Fans prefer the sound, and DJs like to “scratch” them, which is pretty cool the first hundred times you hear it. A series of pieces in the UK paper the Guardian, describes the current state of vinyl, […]


Flash Memory Summit next week

August 1, 2016

And sad to say, for the first time in years, StorageMojo won’t be there. Dang it! A physical condition is cramping my style. It’s temporary and will be fixed by early next year. So I’ll be looking for whatever gets posted online, but missing the show floor. The StorageMojo take For a few years the […]


A look at Symbolic IO’s patents

July 22, 2016

Maybe you saw the hype: Symbolic IO is the first computational defined storage solution solely focused on advanced computational algorithmic compute engine, which materializes and dematerializes data – effectively becoming the fastest, most dense, portable and secure, media and hardware agnostic – storage solution. Really? Dematerializes data? This amps it up from using a cloud. […]
