Flash slaying the latency dragon?

by Robin Harris on Thursday, 13 August, 2015

I’m at the Flash Memory Summit this week. Two observations, one short and one long.

Short first.
FMS is much larger than last year. Haven’t seen numbers, but the huge exhibit floor looks like a first-rank storage show. VMworld has been a top storage show, but now it has competition.

Has latency met its match?
Can’t talk about the interesting startups in the area, but an HGST technology demo – i.e. no plans for a product – gives a flavor of what’s to come. They hooked up two servers sporting InfiniBand and NVMe SSDs, ran an I/O test, and showed latencies of ≈2.5µsec for remote drive access.

That’s right: microseconds.
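
For a sense of scale, here is a minimal sketch – not HGST’s harness; the device path, block size and sample count are my assumptions – of how one might measure per-read latency on a local NVMe drive. At single-digit microseconds, even the measurement loop’s own overhead starts to matter.

    # Minimal latency sketch (illustrative only, not HGST's test).
    # Assumes Linux, Python 3.8+, and read access to DEV (hypothetical path).
    import mmap, os, random, statistics, time

    DEV = "/dev/nvme0n1"   # hypothetical device path
    BLOCK = 4096           # 4 KiB reads, aligned as O_DIRECT requires
    SAMPLES = 10_000

    fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)
    size = os.lseek(fd, 0, os.SEEK_END)
    buf = mmap.mmap(-1, BLOCK)                  # page-aligned buffer for O_DIRECT

    lat_us = []
    for _ in range(SAMPLES):
        off = random.randrange(size // BLOCK) * BLOCK
        t0 = time.perf_counter()
        os.preadv(fd, [buf], off)               # one 4 KiB random read
        lat_us.append((time.perf_counter() - t0) * 1e6)
    os.close(fd)

    print(f"median {statistics.median(lat_us):.1f} µs, "
          f"p99 {statistics.quantiles(lat_us, n=100)[98]:.1f} µs")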

The StorageMojo take
But the interesting startups are claiming even lower numbers. Not clear what these numbers might mean in the real world – they don’t include the problematic latencies of our antiquated storage stack – but the direction is encouraging.

More later.

Courteous comments welcome, of course.


Why it’s time to forget about flash

by Robin Harris on Thursday, 30 July, 2015

Post sponsored by Infinidat

Flash has revolutionized storage, but the industry has lost sight of the customer problem: optimizing storage for availability, performance, footprint, power and cost. Industry analysts aren’t helping.

Let’s forget about flash and remember why we’re here
Gartner recently defined a Solid State Array segment. That was questioned by Chris Mellor at The Register, and he got a spirited, if unconvincing, defense from Gartner’s Joe Unsworth. The basic issue is simple: are markets defined by technology or application?

First principles
Markets are the aggregate of buyers’ decisions about vendor offers. Since IT buyers are herd animals – one of Geoffrey Moore’s key points in Crossing the Chasm – knowing what others are choosing is helpful for risk-averse buyers.

That’s where well-defined market segmentation and analysis can help buyers. That’s also where technology-defined segments – like Gartner’s SSA segment or IDC’s munging of object and file storage – fail buyers.

As I said then:

Done well, market segmentation helps to reveal the underlying dynamics of marketplace activity.

As some commenters then noted, it makes sense to segment by customer need, not technology. But when major vendors want bragging rights, Gartner and IDC are happy to oblige with a flattering but useless technology segmentation.

What defines a segment?
Technologies don’t define market segments. Take Ethernet.

Ethernet started as CSMA/CD (Carrier Sense Multiple Access with Collision Detection) and has evolved through different technologies as speeds have increased. Because the application – local area networking – didn’t change, the radical changes in technology made no difference to the market segment.

Application trumps technology
Instead of technology, most product segments are defined by application use: what does the product do for the buyer? In the case of enterprise arrays, the defining characteristics are availability, performance and management.

All-flash arrays (AFA): segment or technology? Companies have hyped SSD-based AFAs as an I/O panacea.

But architecture and implementation still matter. An engineer is someone who can do for a nickel what any fool can do for a dollar. Smart choices and quality implementation trump technological determinism every day.

The storage pyramid is an economic fact, not a technical choice. If we had an extremely fast, non-volatile and cheap technology, the storage pyramid would collapse into a single layer.

Our toolkit

  • DRAM: fast, with byte addressability and unlimited life, but expensive and power hungry.
  • Flash: fast reads, slow writes, large block addressing, limited life, but cheaper than DRAM and more power efficient.
  • Disk: limited IOPS, good bandwidth for streaming, small block addressability, unlimited writes, non-volatility, and low cost.

The StorageMojo take
Since the storage pyramid is economic, engineering decisions behind storage architectures are economic too. Buyers should look for the array that offers the highest performance and availability at the best total cost, rather than assuming that any particular technology will offer the “best” solution.

As technologists we are primed to reach for the “solution”, be it a pill, a policy or a product. But modern data centers are a bundle of problems and constraints. Flash performance, while helpful, is not a magic bullet.

What we can do is choose the most flexible infrastructure that fits our workloads, data center and budget. Workloads still exhibit locality of reference, both temporally and spatially, so most storage systems – even “all-flash” arrays – use DRAM for caching hot data.
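
To illustrate locality (a toy sketch of my own, not any vendor’s design): even a modest cache in front of a slower tier absorbs most of a skewed workload, which is why DRAM caching persists in “all-flash” designs.

    # Toy locality-of-reference sketch: a small LRU cache in front of a slow
    # tier. The workload mix and sizes are illustrative assumptions.
    import random
    from collections import OrderedDict

    class LRUCache:
        def __init__(self, capacity):
            self.capacity = capacity
            self.data = OrderedDict()        # key -> block, ordered by recency
            self.hits = self.misses = 0

        def read(self, key):
            if key in self.data:
                self.data.move_to_end(key)   # most recently used
                self.hits += 1
            else:
                self.misses += 1             # would fetch from flash/disk here
                self.data[key] = object()
                if len(self.data) > self.capacity:
                    self.data.popitem(last=False)   # evict least recently used

    cache = LRUCache(capacity=200)
    hot = range(100)                  # small hot set, fits in the cache
    cold = range(100, 100_000)        # large cold set
    for _ in range(50_000):
        key = random.choice(hot) if random.random() < 0.9 else random.choice(cold)
        cache.read(key)               # roughly 90% of reads go to the hot set
    print(f"hit rate ≈ {cache.hits / (cache.hits + cache.misses):.0%}")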

Redundancy – in data and hardware – is the foundation of availability. High density and low power consumption help overstuffed data centers breathe easier. Scalability is important too, since whatever tomorrow brings, there will be a lot of it.

And, finally, affordability – the key to any business use of technology. When it comes to storage, hard drives are still the lowest-cost option for active data.

The bottom line is that storage buyers have more choices than ever thanks to new technologies. But don’t buy hype: buy the combination of features and capabilities that is the best fit for your needs. Infinidat believes they’ve come up with a high-performance array whose modern architecture and triple redundancy puts them well above traditional legacy arrays – and I’m inclined to agree.

Courteous comments welcome, of course. This is StorageMojo’s first sponsored post in 10 years. Your thoughts on Infinidat and/or sponsored posts?


NVM is off to the races

by Robin Harris on Wednesday, 29 July, 2015

With the Intel/Micron non-volatile memory announcement – said to be in production today – the race to produce next-generation non-volatile memories has gotten serious. And not a moment too soon!

What did they announce?
You can read the press release here.

Key quote:

3D XPoint technology combines the performance, density, power, non-volatility and cost advantages of all available memory technologies on the market today. The technology is up to 1,000 times faster and has up to 1,000 times greater endurance than NAND, and is 10 times denser than conventional memory.

And it’s byte-addressable.
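
Taken at face value, the claims translate into rough numbers. A back-of-envelope sketch follows; the NAND baselines are my own order-of-magnitude assumptions, not Intel/Micron figures.

    # Back-of-envelope sketch. The NAND baselines are my rough 2015-era
    # assumptions, not numbers from the Intel/Micron announcement.
    nand_read_latency_us = 50      # typical NAND page read, order of magnitude
    nand_pe_cycles = 3_000         # typical MLC NAND endurance, order of magnitude

    xpoint_latency_ns = nand_read_latency_us * 1_000 / 1_000   # "up to 1,000x faster" -> ~50 ns
    xpoint_pe_cycles = nand_pe_cycles * 1_000                   # "up to 1,000x endurance" -> ~3,000,000 cycles

    print(f"implied read latency ≈ {xpoint_latency_ns:.0f} ns, "
          f"implied endurance ≈ {xpoint_pe_cycles:,} cycles")

If those assumptions are roughly right, the implied latency approaches DRAM territory without reaching it – consistent with the take below that 3D XPoint won’t displace DRAM.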

[Image courtesy Intel/Micron]


It looks like they’re productizing the Crossbar RRAM. I’ve written about Crossbar’s technology on StorageMojo and ZDNet.

According to Crossbar, their memory cell uses a metallic nano-filament in a non-conductive layer that can be built in current CMOS fabs. Since Crossbar’s business model is to license their technology, as ARM does, Intel/Micron could use their technology.

Even Intel’s name “3D XPoint” – pronounced “3D crosspoint” – sounds like Crossbar’s nomenclature.

Other Crossbar stats:

  • They’ve fabricated cells down to 8nm feature sizes.
  • 20x faster writes than flash.
  • 10 year retention at 125F.
  • Up to 1TB capacity per chip.

Clearly, I/M did not echo these stats, which gives me pause. But hey, I/M has smart guys too, so enhancing a licensed technology is likely.

Production?
The press release claims that the new memory is in production. But where?

3D Xpoint™ technology wafers are currently running in production lines at Intel Micron Flash Technologies fab

So they haven’t cranked up a $5B fab to produce this yet. Current production is for sampling later this year with no date for products based on the technology.

The StorageMojo take
Whether this is Crossbar’s technology or not, this is great news for the storage industry. NAND flash has notable deficits as a storage technology, and 3D XPoint addresses those.

But one advantage NAND flash has – and will retain for the foreseeable future – is cost. While the Crossbar technology offers a smaller feature size – scaling is one of flash’s deficits – and potentially a lower cost per bit than flash, it will take years for those advantages to be reflected in device costs.

Nor can 3D XPoint expect to replace DRAM. It’s much faster than NAND but slower than DRAM, so only devices that don’t need DRAM performance will be candidates for 3D XPoint.

But this announcement reinforces the need for the industry to fix the outdated, disk-oriented, software stack that is holding back I/O performance. The Intel/Micron announcement should focus architects on this vital issue.

Courteous comments welcome, of course.


The storage tipping point

by Robin Harris on Monday, 29 June, 2015

Storage is at a tipping point: much of the existing investment in the software stack will be obsolete within two years. This will be the biggest change in storage since the invention of the disk drive by IBM in 1956.

This is not to deprecate the other seismic forces of flash, object storage, cloud and the newer workloads that are driving investment in scale-out architectures and NoSQL databases. But 50 years of I/O stack development – based on disks and, later, RAID – is essentially obsolete today, as will become obvious to all very soon.

Why?
In a nutshell, the performance optimization technologies of the last decade – log structured file systems, coalesced writes, out-of-place updates and, soon, byte-addressable NVRAM – are conflicting with similar-but-different techniques used in SSDs and arrays. Case in point: Don’t stack your Log on my Log, a recent paper by Jingpei Yang, Ned Plasson, Greg Gillis, Nisha Talagala, and Swaminathan Sundararaman of SanDisk.

Log-structured writes go to free space at the “end” of the log, as if the free space were a continuous circular buffer. Stale data must be periodically cleaned up and its blocks returned to the free space pool – a process known as garbage collection.

Log structured file systems and the SSD’s flash translation layer (FTL) both use similar techniques to improve performance. But one has to wonder: what is the impact of two or more logs on the total system? That’s what the paper addresses.
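
Here is a minimal sketch of the pattern both layers share – an illustrative toy, not SanDisk’s model or a real FTL: writes always append, overwrites strand stale copies, and garbage collection copies live data forward to reclaim space.

    # Toy log-structured store (illustrative only). Writes append; overwrites
    # leave stale records that garbage collection must later reclaim -- the
    # same pattern a file system log and an SSD's FTL both use.
    class Log:
        def __init__(self):
            self.log = []       # append-only list of (key, value) records
            self.index = {}     # key -> position of the live record

        def put(self, key, value):
            self.index[key] = len(self.log)   # any older record becomes stale
            self.log.append((key, value))

        def get(self, key):
            return self.log[self.index[key]][1]

        def garbage_collect(self):
            # Copy live records forward, drop stale ones, reclaim the space.
            live = [(k, self.log[pos][1]) for k, pos in self.index.items()]
            self.log, self.index = [], {}
            for k, v in live:
                self.put(k, v)

    store = Log()
    store.put("a", 1); store.put("a", 2)    # second put makes the first stale
    store.garbage_collect()                 # reclaims the stale record
    print(store.get("a"), len(store.log))   # -> 2 1

Now stack two of these: the file system’s garbage collection rewrites data the SSD’s FTL has already placed, and the FTL then garbage-collects those rewrites. That is the log-on-log problem.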

The paper explores the impact of log structured apps and file systems running on top of log structured SSDs. In summary:

We show that multiple log layers affects sequentiality and increases write pressure to flash devices through randomization of workloads, unaligned segment sizes, and uncoordinated multi-log garbage collection.

How bad?
The team found several pathologies in log-on-log configurations. Readers are urged to refer to the paper for the details. Here are the high points.

  • Metadata footprint
    Each log layer needs to store metadata to track physical addresses as it appends new data. Many log layers support multiple append streams, which, the authors discovered, have important negative effects on the lower log. File system write amplification could increase by as much as 33% as the number of append streams went from 2 to 6.
  • Fragmentation
    When two log layers do garbage collection, but at different segment sizes and boundaries, the result is a segment-size mismatch that creates additional work for the lower layer. When the upper layer cleans one segment, the lower layer may need to clean two (see the sketch after this list).
  • Reserve capacity over-consumption
    Each layer’s garbage collection requires consumption of reserve capacity. Stack the GC layers and more storage is used.
  • Multiple append streams
    Multiple upper-layer append streams – useful for segregating different application update frequencies – can cause the lower log to see more data fragmentation.
  • Layered garbage collection
    Each layer’s garbage collection runs independently, creating multiple issues, including:
    • Layered TRIMs. TRIM at the upper layer doesn’t reach the lower layer, so the lower layer may have invalid data it assumes is still valid.
    • GC write amplification. Independent GC can mean the lower layer cleans a segment ahead of the upper layers, causing re-writes when the upper layer communicates its changes.
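
To make the fragmentation point concrete, here is a toy calculation – my own illustration, not from the paper – of how an upper-layer segment that straddles lower-layer segment boundaries forces the lower layer to clean more than one segment.

    # Toy fragmentation sketch (my illustration, not from the SanDisk paper):
    # count how many lower-layer segments one upper-layer segment touches.
    def lower_segments_touched(upper_start: int, upper_size: int, lower_size: int) -> int:
        first = upper_start // lower_size
        last = (upper_start + upper_size - 1) // lower_size
        return last - first + 1

    # A 1 MiB file-system segment that starts midway into a 2 MiB SSD segment
    # dirties two SSD segments, so the lower layer must clean (and copy) both.
    print(lower_segments_touched(upper_start=1_572_864,   # 1.5 MiB offset
                                 upper_size=1_048_576,    # 1 MiB segment
                                 lower_size=2_097_152))   # 2 MiB segments -> 2

An aligned 1 MiB segment would touch only one lower segment; misalignment roughly doubles the lower layer’s cleaning work in this toy case.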

The StorageMojo take
Careful engineering could solve the log-on-log problems, but why bother? I/O paths should be as simple as possible. That means a system, not storage, level attack on the I/O stack.

50 years of HDD-enabling cruft won’t disappear overnight, but the industry must get started. Products that already incorporate log structured I/O will have a definite advantage adapting to the brave new world of disk-free flash and NVM memory and storage.

Storage is the most critical and difficult problem in information technology. In the next decade new storage technologies will enable a radical rethink and simplification of the I/O stack beyond what flash has already done.

Six months ago I spoke to an IBM technologist and suggested that the lessons of the IBM System/38 – which sported a single persistence layer spanning RAM and disk – could be useful today. He hadn’t heard of it.

The SanDisk paper doesn’t directly address latency, but that’s the critical element in the new storage stack. Removing multiple log levels won’t optimize for latency, but it’s a start.

Courteous comments welcome, of course.


Hike blogging: the Twin Buttes loop

by Robin Harris on Sunday, 21 June, 2015

Summer finally arrived in Northern Arizona, about 6 weeks later than usual. Good news: no wildfires, thanks to lots of rain. Bad news: I was freezing!

I got out to the Twin Buttes before 7am – and was a little late. Shade is a valuable commodity in the Arizona summer!

In another 10 days or so the summer monsoon will start, my favorite time of year. It will still be hot, but moist air from the Gulf of California arrives with plenty of clouds and thunderstorms. The rain ends the annual wildfire season, while the clouds dapple the already dramatic landscape in ever-changing patterns.

The Twin Buttes Loop is about 6 miles and relatively flat: just a few hundred feet of ascent. But the scenery is anything but flat.

Courthouse and Bell rocks as seen from Chicken Point:

The rocks are a popular mountain biking destination, as the sign in this picture suggests. The double black diamond is for bikers, not hikers. Having hiked the trail, I can assure you that there are places where, once you start, you are committed to a rapid descent whether you are still on your bike or not.

Finally, another picture: Spring in the Desert. The fruiting body of an agave plant is on the left. These stems shoot up several inches a day and present their flowers for pollination and then seed distribution. The stem is so large that it can cause the whole plant to capsize, ripping its roots out of the ground.

The desert isn’t easy.

The StorageMojo take
May you always walk in beauty.

Courteous comments welcome, of course.


Can NetApp be saved?

by Robin Harris on Wednesday, 17 June, 2015

If NetApp is going to save itself – see How doomed is NetApp? – it needs to change the way it’s doing business and how it thinks about its customers. Or it can continue as it is and accelerate into oblivion.

NetApp’s problem
NetApp is essentially a single product-line company, and that product line is less and less relevant to customer needs. There’s faster block and SAN storage, and much cheaper object storage in the cloud and on-prem. NetApp is in a sour spot, not a sweet one.

Here’s what NetApp needs to do to regain momentum.

Embrace multiple product lines. OnTap, while competitive in no-growth legacy applications, is not competitive with modern scale-out object storage systems. NetApp needs more arrows in its quiver.

NetApp could learn from EMC, a company that has developed almost no products itself in the last 20 years – Atmos is the exception that comes to mind. Instead, EMC buys sector leaders and pushes them through its enterprise sales channel. Both Isilon and Data Domain have major architectural flaws compared to more modern products, but EMC’s sales clout wins the day.

Embrace scale out storage. NetApp made a brilliant move when they purchased object storage pioneer Bycast. The Canadian company suffered from timid marketing thanks to a traditional Canadian reluctance to toot one’s own horn. Overcommitment to CDMI hasn’t helped either.

But Bycast had a strong foothold in medical imaging and a great story: a Bycast installation survived Hurricane Katrina in New Orleans without any data loss. Haven’t heard that story? You, and everyone else.

Buy Avere. Avere’s product is an intelligent front-end cache for multiple NetApp filers. It simplifies filer management by keeping hot data local and eliminating the need to balance hot files across multiple filers.

But when you buy it, don’t try to integrate it with OnTap. It is a network device, and needs to be sold as such.

Pump up the channel. Easier said than done, but NetApp has to get ready for a lower margin future, and embracing the channel is the easiest way to start. More will need to be done – products that need less support thanks to automated support for instance – but getting lean and mean is table stakes for our brave new storage world.

The StorageMojo take
Despite the fact that NetApp stopped talking to me several years ago – except for a recent briefing invite – I still like them and wish them well. Thus this advice.

With a well-regarded global brand and a broad enterprise presence, NetApp has assets that startups can only dream of. But so did DEC, Sun and Kodak, and bad management frittered those assets away.

NetApp urgently needs a strategy reboot. Whether the new management team is up to the task remains to be seen. I hope they are.

Comments welcome, as always.


Why it’s hard to meet SLAs with SSDs

June 3, 2015

From their earliest days, people have reported that SSDs were not providing the performance they expected. As SSDs age, for instance, they get slower. But how much slower? And why? A common use of SSDs is for servers hosting virtual machines. The aggregated VMs create the I/O blender effect, which SSDs handle a lot better […]


Make Hadoop the world’s largest iSCSI target

June 1, 2015

Scale out storage and Hadoop are a great duo for working with masses of data. Wouldn’t it be nice if it could also be used for more mundane storage tasks, like block storage? Well, it can. Some Silicon Valley engineers have produced a software front end for Hadoop that adds an iSCSI interface. The team […]


Hospital ship Haven in Nagasaki, Japan, 1945

May 25, 2015

StorageMojo is republishing this post to mark this Memorial Day, 2015. In a few months we will be marking the 70th anniversary of the end of World War Two as well. My father was a career Navy officer and this is a small part of his legacy. See the original post for the comments, many […]


No-budget marketing for small companies

May 13, 2015

You are a small tech company. You have a marketing guy but it’s largely engineers solving problems that most people don’t even know exist. How do you get attention and respect at a low cost? Content marketing. When most people think about marketing, they think t-shirts, tradeshows, advertising, telephone calls, white papers and brochures. Those […]


Hike blogging: Sunday May 10 on Brins Mesa

May 11, 2015

The Soldiers Pass, Brins Mesa, Mormon Canyon loop is my favorite hike. It has about 1500 feet of vertical up to over 5000 ft and the combination of two canyons and the mesa means the scenery is ever changing. This shot is taken looking north from the mesa to Wilson Mt. It was a beautiful […]


FAST ’15: StorageMojo’s Best Paper

May 11, 2015

The crack StorageMojo analyst team has finally named a StorageMojo FAST 15 Best Paper. It was tough to get agreement this year because of the many excellent contenders. Here’s a rundown of the most interesting before a more detailed explication of the winner. CalvinFS: Consistent WAN Replication and Scalable Metadata Management for Distributed File Systems […]


EMC II’s ragged last quarter

April 27, 2015

As reported in a Seeking Alpha quarterly call transcript, EMC’s storage unit had a $75 million shortfall in Q1. CEO Joe Tucci said . . . we were disappointed that we fell a bit short of our Q1 revenue plan, approximately $75 million short. This $75 million revenue shortfall occurred in our storage business. That […]


How doomed is NetApp?

April 13, 2015

The current turmoil caused by plummeting cloud storage costs, new entrants sporting modern architectures and the forced re-architecting due to flash and upcoming NV memories is a perfect storm for legacy vendors. Some are handling it better than others, but some, like IBM and NetApp, appear to be sinking. NetApp is signalling that their 2015 […]


EMC’s DSSD hiring is exploding

February 18, 2015

DSSD, the Valley startup acquired by EMC last year (see EMC goes all in with DSSD) is continuing to hire at an accelerating rate. Informed sources put the current DSSD team at 160 heads with plans to grow it to 800 over the next year. This is a program in a hurry. Hiring such numbers […]
