Is STEC’s lead sustainable?

by Robin Harris on Thursday, 27 August, 2009

Chris Mellor of The Register considers whether STEC’s lead in the high-performance SSD space is sustainable.

When competition does arrive in the enterprise SSD market, and as STEC starts competing in the more price-sensitive server flash market, then its early golden days – its first-mover advantages – will come to a close. Prices will drop and it will have to replace lost margins with higher volumes.

The company probably has until mid-2010 at the earliest before that happens.

The innovator’s dilemma
In the book by that title, Clayton Christensen reviews the disk industry’s form-factor transitions – despite coining the term “disruptive technology,” the only thing that changed was form factors – and finds that, with very few exceptions – Seagate & IBM – a 2-year lead in a new form factor was all it took to consign former players to oblivion.

How could a mere form factor change trigger mass corporate extinctions? Because in OEM storage a first mover may win the business, but it is relationships and service that keep the business.

It takes time and effort for qual engineers to develop confidence in vendor engineers. Once they do, inertia rules. STEC has a big lead, and as long as their management team executes there is little competitors can do.

An example
Seagate’s first product, a 5 MB 5.25″ hard drive – I had one in a DEC Pro 350 – was pathetic: tiny, slow and noisy. Silly, compared to the fast 9″ drives that were taking the enterprise by storm.

But by the time 9″ vendors shipped 5.25″ drives, Seagate had shown the OEMs they were a quality supplier. So the OEMs only needed an alternate supplier. Dozens exited the disk business in a few years.

The StorageMojo take
There might be room for 3 enterprise SSD vendors, but 2 is more likely. The long-term threat to STEC is architectural: as often noted on StorageMojo, flash does best close to the CPU. Check out Fusion-io.

Of course the array and storage interconnect vendors should be working hard to reduce storage latencies, which works to STEC’s advantage, but this will take 5 years, not 2, to sort out. And the threat to Fusion-io is flash on the mobo, but since almost no one is working on it that is a distant threat indeed.

Long term the real fight is between NAS and DAS. How do the advantages of shared storage stack up against very fast local storage? There’s a place for both of course, but given bandwidth’s slower cost declines DAS – for the first time in years – may have a sustainable advantage.

Courteous comments welcome, of course. Disclosure: I’ve done work for Fusion-io and wish I owned stock. I haven’t done work for STEC, but do own their stock. Neo, what should bake your noodle is: do I own STEC stock because of my analysis, or is the analysis due to my owning their stock? Which came first?

{ 14 comments… read them below or add one }

Mark Jaggers August 27, 2009 at 2:13 pm


What about Intel as a more direct competitor of STEC?

They certainly have the money to further R&D without having the revenue from their SSD products, and since they are also selling direct I would expect them to be able to move down the cost curve faster as they ramp up production and target a broader market segment than just the enterprise disk area.

Robin Harris August 27, 2009 at 2:41 pm

I’m not seeing how Intel’s SSD strategy fits with their core strengths in chips and motherboards. I’m just about to test several Intel X-25s, and from everything I’ve seen they are terrific SSDs.

Intel’s obvious – to me – strategy should be to put SSD functionality on server/workstation mobos. Have a couple of flash daughtercard connectors or something into which you’d plug Intel flash boards. They could have a high-performance bus – something faster than PCIe – and a thin driver, and maybe outdo Fusion-io on cost, performance or both.

But Intel is a big company and it wouldn’t surprise me if the mobo mavens reject that suggestion out of hand, since they didn’t think of it. But hey! Maybe not.

KD Mann August 27, 2009 at 5:03 pm


Great post, but (from the Christensen perspective) I absolutely disagree that Enterprise Flash SSD is a disruptive technology.

Flash SSD improves on Enterprise-class spinning disk along all of the existing performance and value dimensions that current mainstream-market customers value; therefore SSD, by definition, is not a disruptive technology.

For example, consider that if Seagate were to announce a 90,000 RPM disk (yes, that’s impossible) designed to incorporate all of the features that “mainstream market” enterprise customers value (and likewise costing 10x more than 15K disks), this would be — by definition — a sustaining technology. This device would represent an improvement along existing trajectories of evolving customer requirements.

SSD is no different. Just because the magnitude of the performance improvement is large does not make it “disruptive”. Just because its additional reliability is gained by virtue of being solid-state doesn’t make it disruptive either.

According to the Christensen model (which is definitive IMO), Enterprise-class SSD is an improvement along the sustaining-technology trajectory, offering more of what existing customers already value, and at a price-premium that reflects the existing value framework.

In this case, Seagate is doing exactly what they should be doing — waiting until the technology is fully baked. And Chris Mellor makes a good case against STEC in the long term.

Yes, Seagate is famous for “failing” in the face of previous disruptive transitions. They blew it in the transition from 5.25″ to 3.5″ disks (and ultimately needed to acquire Conner to recover). This was precisely because the 3.5″ disks that Conner was building were (as Christensen says) a “crummy product for a poorly defined market”, and Seagate — being a “well managed” company was simply incapable — for all the right reasons — of building crummy products for poorly defined markets. This really is the essence of The Innovator’s Dilemma.

Nobody thinks that enterprise-class SSD is a “crummy product”, and nobody thinks the market for high-performance storage (at which SSD is clearly aimed) is “poorly defined”.

In any sustaining-technology game, the odds historically favor the incumbent(s). Mind you, this does not necessarily mean Seagate, though it could. If I were to pick a winner now, I’d put some serious money on Micron and their Fusion-IO clone PCIe card. The Flash storage game looks a lot like the DRAM business. What possible reason could there be for Flash to live behind a block-device interface in the long term, vs. sitting on the system bus, or on the server motherboard for that matter? I think we agree that Fusion-IO has the right architecture, but Micron owns the foundries. Whoever wins this game will own, or control, lots of silicon foundry capacity.

Here’s another theory that I think explains both Seagate’s “lateness” to this market as well as the reasons why the STEC founders sold off the majority of their personal holdings last week. I believe that Steven Hetzler of IBM’s Almaden Research labs (Jan-09) and Sandisk CTO Eli Harari (two weeks ago at the Flash Summit) have absolutely nailed it. The economics of building silicon foundries simply does not support the kind of continued rapid declines in NAND flash pricing that everyone agrees are essential for serious SSD penetration — beyond about 1% of the enterprise HDD market.

Maybe that’s why Seagate is so late to this game.

Jean August 27, 2009 at 9:58 pm

You need more than just hardware to create a disruptive technology. In our case I think the file system is also part of the disruption. Take a look at what Sun’s ZFS can do with SSDs: ZFS has two caches (read and write) that can be tuned to boost speed by leveraging SSDs. That is what they do with their Unified Storage. Nothing prevents anyone else from doing the same within their servers or storage. Linux and other file system makers are working on similar features. Transparent data placement is key to performance.
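For reference, the two ZFS caches in question are the L2ARC (read cache) and the separate ZIL log device, or SLOG (synchronous-write accelerator); each can be backed by an SSD with a one-line command. A minimal sketch, assuming a pool named tank and illustrative device names:

```shell
# Attach an SSD as an L2ARC read cache to the pool "tank"
# (pool and device names here are made up for illustration)
zpool add tank cache c1t2d0

# Attach a mirrored pair of SSDs as a separate ZIL log device
# to accelerate synchronous writes
zpool add tank log mirror c1t3d0 c1t4d0

# Verify the new cache and log vdevs
zpool status tank
```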

Both Fusion-io and STEC will have more competition sooner or later. Samsung, Intel, Micron and Toshiba are working hard. One day they will deliver something that kills our current disk technology.

I think SSD will be a disruptive technology regardless of what some people say. It will replace spinning disk first (less than 5 years), later become part of the memory system using a special bus (early next year), and finally replace tape media when its price approaches physical tape medium cost (>5 years).

The latest SD card specification supports up to 2TB of addressable storage at up to 300MB/sec, and promises to be more reliable as well. I expect to see 2TB capacity by the end of next year or early 2011. Let’s see where that technology takes the consumer market. It will force enterprise prices down.

Roland Bavington August 28, 2009 at 2:10 am

The article and some of the comments above ignore the reasons why storage now sits outside the server.
SSD on the motherboard or elsewhere within the server constrains the usefulness of that storage to applications that can run on or access that server. It also has to be replaced when the server is refreshed, and being within the server makes it awkward to protect for business continuity or disaster recovery.
I have no doubt that Fusion IO and others we have not heard of will make lots of money putting Flash close to the processor for the right applications, but SSD on the SAN, masquerading as a block device, will live on as long as disks are the principal method of storing data.
An interesting debate is what happens when SSD adoption in centralised storage becomes ubiquitous. Will we see centralised devices addressed by some sort of RDMA process, and multi-terabyte disk drives pretending to be Flash storage? Could SSD drive broader adoption of technologies like InfiniBand that include RDMA calls in the base design?

KD Mann August 28, 2009 at 7:12 am


As far as “disruptive” goes, I’m working from Clayton Christensen’s definition (which is the only one there is, as far as I know). From that perspective, Enterprise SSD is absolutely not a disruptive technology.

In Christensen’s chart found on the page below, Enterprise SSD is that big step along the upper (sustaining technology) trajectory, right in the middle of the chart.

Now, are there applications of NAND Flash technology that might truly be disruptive? I think there are. Here’s a stunning example from Intel (Pittsburgh) research, presented at this year’s SIGMOD conference:

“We further compare our design with one that uses Solid-State Drives (SSDs), and find that although SSDs improve logging performance, multiple USB flash drives can achieve comparable or better performance with much lower price.”

In the paper above, Intel shows that the combination of a cheap HDD and a couple of USB Thumb Drives outperforms (by a wide margin) Intel’s own X-25 SSD for a TPC-class “Tier-zero” application (handling the Transaction Log File), at a tiny fraction of the cost.

Now that’s one helluva disruptive innovation!

I don’t think Enterprise customers are going to start embracing thumb-drives for transactional database acceleration anytime soon, but this research shows what the ultimate future of NAND Flash in the enterprise looks like — much smaller and a lot less profitable than most people currently believe.
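The workload shape explains why cheap flash can win here: a transaction log is pure sequential append plus fsync, which sidesteps the random-write weakness of low-end flash. A minimal sketch of that pattern (the TinyWAL class, file names, and record format are illustrative, not from the Intel paper):

```python
# Sketch of write-ahead logging: updates live in memory, while an
# append-only, fsync'd log provides durability. Sequential appends are
# exactly the workload that inexpensive flash handles well.
import json
import os
import tempfile

class TinyWAL:
    """Append-only write-ahead log; durability comes from fsync on commit."""

    def __init__(self, path):
        self.f = open(path, "a+b")

    def commit(self, record: dict):
        line = (json.dumps(record) + "\n").encode()
        self.f.write(line)          # sequential append -- no random writes
        self.f.flush()
        os.fsync(self.f.fileno())   # force the record onto the (cheap) device

    def replay(self):
        """Re-read the log from the start, e.g. for crash recovery."""
        self.f.seek(0)
        return [json.loads(l) for l in self.f if l.strip()]

path = os.path.join(tempfile.mkdtemp(), "txn.log")
wal = TinyWAL(path)
wal.commit({"txn": 1, "op": "debit", "amount": 50})
wal.commit({"txn": 2, "op": "credit", "amount": 50})
recovered = wal.replay()
print(len(recovered))  # prints 2
```

Every commit costs one small sequential write and one flush; random reads and writes never touch the log device, which is why even thumb-drive-class flash keeps up.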

Finally, to your comment about SSD replacing HDD…the point that Hetzler and Harari have recently made (and the former has shown convincingly, IMO) is this: the fundamental realities (hard costs) of building silicon foundries make it economically impossible for Flash to ever replace disk, or even to make a substantial (>1%) dent in the market for HDD in any foreseeable future.

Steven Hetzler’s presentation is below, and Harari’s is available from the Flash Summit website.

Greg Schulz August 28, 2009 at 9:14 am


What if TMS were public? What would their stock look like, and who would own it? Granted, and with all due respect to Woody, Jamon, Phil and the crew, they are not the over-the-top marketing and hype machine that some in the space are.

Interesting how others like Curtis, who have been around for eons, have been almost off the radar. However, as we have seen with previous SSD cycles, some like TMS stick around (disclosure: I did some work for TMS a couple of years ago, though not currently, and have never done work for Curtis) while others disappear (Imperial, and DEC/Compaq/HP’s ESE20, of which you were on the marketing end; I was one of the launch customers at the time).

Ok, with that out of the way, on the surface I would agree that SSD (ram or flash) are not part of Intel’s core business of building chips.

However, building motherboards is also not part of Intel’s core business, although it provides a means of showing off chips and getting them into the marketplace and ecosystem. Hence Intel doing a line of flash- or RAM-based SSDs makes a lot of sense: as a way for some of their customers to get the technology turnkey; to partner with others where Intel supplies the technology for the resulting products; or, as Intel does today, to leverage partners’ technology as part of their own chips and/or solutions.

Let’s also not forget that Intel has a history, in addition to providing white-box servers/blades/boards, of providing white-box storage (block and file, DAS and NAS) solutions that combine their own IP with partners’. Why should SSD be any different?

If the SSD market is really going to be big this time around as has been predicted by some, there will have to be at least two volume component vendors, maybe three near-term, along with many different solution/system providers followed by some shakeout consolidation.

Here’s the wrinkle, though: many of the wild over-the-top SSD adoption forecasts are predicated on flash prices dropping while capacity increases, something that is also happening with hard disk drives and even tape. The challenge is this: in order to reach the pricing inflection point that triggers a massive tipping point, can two manufacturers, let alone one, afford to produce the chips? Some forecasts circulating in various news reports say manufacturers will not be able to produce flash chips at the price points needed to cause the tipping point.

Is this raining on the SSD parade? Well, some might think so; however, in general it is not, as we have seen in the past.

SSDs, both RAM and flash, continue as they have for decades to have a bright future: in shared storage systems (e.g. EMC, IBM, TMS and everyone using/announcing STEC-based solutions), as dedicated external direct-attached storage, as internal dedicated adapter-based/assisted solutions (Fusion-io, TMS, Adaptec, LSI, etc.), as well as at the device/component level like those from STEC, Curtis, Intel and many others.

Cheers gs
Greg Schulz

Wes Felter August 28, 2009 at 12:09 pm

Robin, look at Intel’s Braidwood. I suspect it will be substantially slower than X25, but if we believe the thumb drive lesson that may be OK.

If there is any disruptive possibility here, it’s probably “consumer” flash SSDs sneaking their way into storage systems. The current “STEC or nothing” attitude of the major vendors leaves many low end and midrange storage customers unserved.

Wes Felter August 28, 2009 at 2:54 pm

Oh, never mind:

The world may never know.

KD Mann September 1, 2009 at 12:25 pm

To Wes’s comment above, Braidwood is apparently alive and well.

“Intel technology poses threat to SSDs”

“Intel Braidwood Technology May Sap SSD Demand”

To all the array vendors out there betting their next two years on profitable SSD uptake among their customers; think again (and quickly). The days of NAND Flash parts sitting behind a block-device interface are over (if they ever really got here in the first place).

I think Greg put it best in his post above:

“SSD (both ram and flash) will continue as they have for decades to have a bright future…”

Thanks Greg!!!

Lyman September 1, 2009 at 5:30 pm

> The days of NAND Flash parts sitting behind a block-device interface are over
Braidwood still puts the flash behind a block-device interface, since it appears to serve just as much of a cache function as the flash SSDs in the arrays do. It is a matter of where the cache is placed. If you have multiple computers hitting the same logical device, it makes more sense to put it on the array.

As for physically mimicking legacy block devices: yeah, that is toast. The question is whether the “D” in SSD stands for “storage device” or “physically conforms to a storage device.” If “device” is taken in the more abstract sense, Braidwood is an SSD… kind of hard to make SSDs disappear when you are one. 😉

One benefit of mimicking the legacy form factor, though, is the ability to install a RAM cache on top of the flash; without that, the cache has to live somewhere else (in software drivers, or in the logical presentation from the “box”). Still, dumping the bulk of the physical containers can be a minor inflection point: the flash can be placed where it couldn’t be before.

KD Mann September 24, 2009 at 4:57 pm


Sorry…I was not explicit enough above, and the result is confusion now between “block level abstraction” and “block device interface”.

Indeed, Braidwood relies on a “block level abstraction”, but only because this is the fundamental nature of NAND Flash. Unlike DRAM, all Flash is organized around a block-level abstraction. When I said:

“The days of NAND Flash parts sitting behind a block-device ==interface== are over…”

I was referring to the lame idea of putting NAND Flash on an SSD, as an I_T_L_Q “target” device in the Fibre Channel, SAS, SATA, or other ANSI T-10 context.

KD Mann

KD Mann February 26, 2010 at 4:36 pm

STEC has a lead alright — they are the first NAND SSD maker to go over the cliff…

If EMC can’t sell NAND as an enterprise storage technology, who on earth can? Lawyers are now going after STEC for hiding the underlying performance issues of NAND.

“…In fact, however, the problem was not a lack of sales force effort or consumer knowledge about SSDs, but consumer resistance to purchasing the ultimate product due to its cost, performance, and flexibility of use.”

In the history of computing there has never once been a successful on-line data storage technology based on underlying media that writes 100X slower than it reads. Imagine…it now turns out that the problem of “I/O density” is about both the “I” and the “O”.

Let’s hope that this hard lesson is one we learn from — we need to get on with the legitimate business of finding a real solution to the I/O density problem. And NAND ain’t It.

anasana December 14, 2010 at 10:57 am

>> An example: Seagate’s first product, a 5 MB 5.25″ hard drive – I had one in a DEC Pro 350 – was pathetic: tiny, slow and noisy. <<

Hello all! I have rewritten an emulator of the DEC Pro 350 series for WinXP/7 and have a lot of option-card PROM dumps (MEMory, TMS, DECNA, MSDOS), and am looking for the PRO/CPM Z80 card and IVIS diagnostic ROM dumps. I also need the software on diskettes for the DEC Pro 325/350/380. If you have some, or use a real machine, please write to me any time by e-mail:
Thanks, Borodin Oleksiy.
