The moving target problem

by Robin Harris on Tuesday, 11 July, 2017

With the news that Toshiba has developed 3D quad-level cell flash with 768Gb die capacity, I’m reminded of the moving target problem. This is a problem whenever a new technology seeks to carve out a piece of an existing technology’s market.

Typically, a startup seeks funding based on producing a competitive product in, say, two years. Good analysis will allow for the fact that competition will improve, typically based on then-current improvement trends.

Often two things happen to derail the projections. The most likely is that the new product development cycle slips out, so when the product ships it is up against another 6-18 months of incumbent improvement.

But sometimes the pace of incumbent improvement rises, so even if the newtech meets its schedule projections – when does THAT ever happen? – it is still facing a tougher competitor than planned.
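The compounding behind the moving target is simple but unforgiving. A minimal sketch of the arithmetic (all numbers are illustrative assumptions, not figures from this post): if the incumbent's cost-per-bit falls 40% a year, a schedule slip from two years to three and a half years cuts the cost target the newcomer must beat by more than half again.

```python
def incumbent_cost(start_cost, annual_improvement, years):
    """Project the incumbent's cost-per-bit after compounding improvement.

    annual_improvement is the fractional yearly cost decline, e.g. 0.40
    for a 40%-per-year drop (a hypothetical rate chosen for illustration).
    """
    return start_cost * (1 - annual_improvement) ** years

# Illustrative scenario: the startup plans to ship in 2 years against an
# incumbent currently at $1.00 per (arbitrary) unit of capacity.
planned = incumbent_cost(1.00, 0.40, 2)    # target if the schedule holds
slipped = incumbent_cost(1.00, 0.40, 3.5)  # target after an 18-month slip

print(f"target at plan:    ${planned:.3f}")   # about $0.360
print(f"target after slip: ${slipped:.3f}")   # about $0.167
```

And that is the benign case, where the incumbent's improvement rate stays constant; if it accelerates, the target moves even faster.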

Disk vs flash
Flash had this problem for a couple of decades with disk. In the early 90s I bought an HP Omnibook 300 and forked over another $400 for a 10MB Compact Flash card to replace the power-hungry disk. Some flash proponents probably hoped this was the beginning of a trend.

But it was not to be. Disk vendors discovered how to increase bit density on a regular basis, and disk capacities and areal densities started rising at ≈40% a year. They also built rugged 2.5″ drives for the burgeoning notebook market, and invested in power-saving technologies.

That helped keep flash at bay for another 15 years.

But finally, the flash cost-per-bit dropped below that of DRAM, and the floodgates opened. Flash won the smartphone market, which powered investment in huge fabs, and soon flash prices were dropping faster than disk prices.

But the key was that flash found niches that disks could not serve. And when one of those niches exploded into industry-altering size, the economics of critical mass and mass production kicked in.

NVRAM
I’ve been following NVRAM with great interest for years. That’s partly due to interest in what it could mean for system architecture, but also for its potential as a substitute for flash.

While it’s clear that the NAND flash cost advantage is good for the next decade, it’s also clear that flash has been shoehorned into applications – such as caches – for which it is suboptimal. NVRAM will encroach around the edges of the flash market, not the heart.

MRAM, for example, is already doing a good business in the automotive and mil-spec sectors, because it is really tough. Diablo’s current hybrid NVDIMMs – combo DRAM and flash – could certainly benefit from a pure NVRAM solution if the price was right.

The key is that NVRAM’s sweet spot is well away from flash’s cost-per-bit and density sweet spots, a fact that Toshiba’s announcement exemplifies.

The StorageMojo take
Watching how flash and NVRAM interact in the marketplace over the next decade will be instructive for students of technology diffusion. The two technologies are close in some ways, but differ dramatically in others, so simple flash-out/NVRAM-in stories will be the exception.

That also ignores the potential creativity of architects and engineers as they explore the capabilities of new kinds of NVRAM. Or the potential for a new class of devices that drive NVRAM adoption, as the smartphone drove flash.

In any case the calculus of the moving target will remain. To the nimble go the spoils.

Courteous comments welcome, of course.


Paul Riethmuller September 5, 2017 at 2:37 am

Hey Robin,

we worked together at Sun in the good ol’ days when I was the biggest internal customer for Photon – 32 racks of them – and weathered the Vixel GBIC saga.

Like Scooter used to say, the skill is skating to where the puck is gonna be.

Cheers,

Paul
