Hike blogging: Cibola trail

by Robin Harris on Thursday, 24 September, 2015

Fall is finally starting in the high desert. October is a wonderful month for a visit.

This picture, taken from near the top of the Cibola trail, imperfectly captures one of my favorite views. Several vertical rocks and spires create a tense, 3D arrangement of unpeopled skyscrapers that never fails to enthrall. I haven’t figured out how to capture the feeling in 2D, but this comes close:


Summer is the rainy season here – known as the monsoon – and for the first time in ≈7 years, no part of Arizona is in drought. That explains the clouds and the sun-dappled landscape in the picture.

The Cibola trail is part of a 6-mile loop that is my favorite hike. Perhaps you can see why.


Infinite io’s network storage controller: what is it?

by Robin Harris on Wednesday, 23 September, 2015

Infinite io’s Network Storage Controller (NSC) is a rarity in enterprise storage: an original and unique device. It turns your file storage network into a software defined resource. But it’s not a file server, a caching controller or an intelligent front end to existing storage resources.

The NSC differs by applying deep packet inspection and analytics to network storage. First, the NSC scans all back-end file systems and storage to understand network layout and storage options. Then the NSC’s ultra-low latency layer 7 proxy uses wire-speed deep packet inspection to see all storage network traffic.

This approach to file storage gives the NSC some unique capabilities:

  • The NSC sits in the network, between servers and storage, without changing the network layout.
  • It works at wire speed, so it doesn’t affect throughput.
  • Unlike traditional storage virtualization, it is transparent to the network – if it fails, it automatically passes all packets through to the local file storage, leaving applications and file servers unaffected.
  • The NSC manages storage traffic with much more granularity than array controllers, while also supporting multiple back-end file servers, including cloud storage.

Cloud integration
The seamless integration of cloud resources is the NSC’s strength. Cloud storage is considerably less costly than local storage arrays, but concerns about security, availability and performance give many IT pros pause. How does the NSC manage these issues?

Files are broken up into sniblets or chunks before being compressed, encrypted and moved to cloud object storage. Each sniblet has its own key, so even if an attacker gathered all the sniblets of a file, they’d have to decrypt each one and then piece the file together. You can place the sniblets on multiple cloud providers to make attackers work even harder.
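
Here’s a minimal sketch of the idea – chunk, compress, encrypt with a per-chunk key – to make it concrete. The sniblet size and helper names are my assumptions, not Infinite io’s implementation:

    import zlib
    from cryptography.fernet import Fernet  # pip install cryptography

    SNIBLET_SIZE = 1 << 20  # 1 MiB per sniblet – the size is an assumption

    def make_sniblets(path):
        """Split a file into compressed, individually encrypted sniblets."""
        sniblets = []
        with open(path, 'rb') as f:
            while True:
                chunk = f.read(SNIBLET_SIZE)
                if not chunk:
                    break
                key = Fernet.generate_key()            # one key per sniblet
                token = Fernet(key).encrypt(zlib.compress(chunk))
                sniblets.append((key, token))          # keys go to the TPM-backed store
        return sniblets

Each (key, token) pair can then be scattered across different cloud providers.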

In normal operation, the NSC serves all metadata operations – including those for cloud-resident data – from its internal flash storage, so cloud-based data appears local to applications. When a cloud-stored file is requested, its metadata is served locally while the NSC begins streaming the data.

All metadata and state information are stored in the flash as well as – at your option – in external local and/or cloud storage. A server app can access the state data if the NSC fails, enabling quick recovery from hardware failures. Encryption keys are stored locally in a tamper-resistant TPM chip for quick recovery and added security, and that chip can be backed up as well.

A single file’s sniblets can be placed across multiple cloud providers, enabling parallel access to the file. File and sniblet level ECC enables files to be rebuilt before all sniblets are downloaded – handy in case a service is down or slow.
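
Rebuilding before every sniblet arrives is classic erasure coding. A toy XOR-parity example – just the principle, not Infinite io’s actual ECC – shows how a sniblet lost to a slow or dead provider can be reconstructed from the rest:

    from functools import reduce

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    # Three data sniblets plus one parity sniblet (a 3+1 scheme).
    data = [b'AAAA', b'BBBB', b'CCCC']
    parity = reduce(xor_bytes, data)

    # The provider holding sniblet 1 is slow or down:
    received = [data[0], None, data[2]]

    # Rebuild the missing sniblet from parity plus the sniblets we do have.
    rebuilt = reduce(xor_bytes, [s for s in received if s is not None], parity)
    assert rebuilt == data[1]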

What makes the cloud integration so powerful is that you define what files get moved to the cloud – based on activity, age, size or priority – and the process is entirely transparent to any application that uses file storage.
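
As a rough illustration, the policy side could be as simple as a predicate over file metadata. This hypothetical age-and-size rule is mine, not Infinite io’s policy language:

    import os
    import time

    DAY = 86400  # seconds

    def is_cold(path, min_age_days=90, min_size=1 << 20):
        """Hypothetical policy: large files untouched for 90 days go to the cloud."""
        st = os.stat(path)
        idle_days = (time.time() - st.st_atime) / DAY
        return idle_days >= min_age_days and st.st_size >= min_size

    def migration_candidates(root):
        for dirpath, _, names in os.walk(root):
            for name in names:
                path = os.path.join(dirpath, name)
                if is_cold(path):
                    yield path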

The StorageMojo take
I’m often underwhelmed by those applying network technologies to storage. Networks work with copies; storage with originals. Those are two very different worlds when data needs to be recovered – and in the strategies needed to minimize the need for recovery.

Assuming the NSC works as advertised, it has important advantages over competing front ends such as Avere. For instance, you can deploy it in stages: in display mode it surveys your storage and estimates how much you could save by moving cold data to the cloud.

If that feels good, move to metadata mode, where the NSC accelerates metadata operations using its internal flash, while passing through all updates. Finally, switch on access to public or private cloud storage, choose your file migration policies, and start taking advantage of the cloud’s economies of scale.

Courteous comments welcome, of course. I’d love to hear from people who’ve tried this device. Please provide enough info – which I can keep confidential – so I can be sure you’re real.

Note: This post is based on a white paper I wrote for Infinite io, but the opinions are my own.


Throwing hardware at a software problem

by Robin Harris on Friday, 28 August, 2015

Maybe software will eat the world, but sometimes the physical world gives software indigestion. That fact was evident at the Flash Memory Summit this month.

As mentioned in Flash slaying the latency dragon?, several companies were showing remote storage accesses – using NVMe and hopped-up networks – in the 1.5 to 2.5µsec range. That’s roughly 500 times better than the ≈1msec averages seen on today’s flash storage.

That’s amazing. Really. But will it help?

Fusion-io investigated sharing storage – to help amortize the cost of their PCIe cards across multiple servers – for years, but the tools weren’t there to make it work. Now, with NVMe and PMC’s Switchtec PSX PCIe Gen3 or Avago’s ExpressFabric PCIe storage switches, the hardware tools are there.

The software problem
But lopping off 998µsec from storage I/O isn’t the boost we’d like, because the storage stack is so freakin’ sl-o-o-w-w. How slow?

In a recent record-setting SPC-2 benchmark, an EMC VMAX 400K achieved a 3.5ms response time with 64KiB transfers, 800 streams, and 4 IOs per stream.

However, looking at a recent TPC-C benchmark – which measures at the application level, not the storage device level – we see minimum response times of 110ms and maximum response times of almost 10 seconds. Clearly, 1ms doesn’t make much difference.

Granted, the TPC-C results include application and database overhead – in this case SAP – not just the storage stack. But with all-flash arrays averaging under 1ms SPC response times, performance improvements need to come from the software.
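
The back-of-the-envelope arithmetic makes the point. Even assuming the storage stack could be made free:

    # Even if storage I/O dropped to zero, an application spending 110ms
    # per transaction would barely notice a 1ms storage response time.
    app_response = 110e-3     # best-case TPC-C response time, seconds
    storage_latency = 1e-3    # typical all-flash array response time, seconds

    print(f"Best possible gain: {storage_latency / app_response:.1%}")
    # -> Best possible gain: 0.9%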

The StorageMojo take
The yawning chasm between SPC and TPC results calls into question the value of the SPC benchmarks. They’re great for vendors’ “plausible deniability” when customers complain about performance. But with storage contributing such a small portion of total latency, it’s obvious that software – and perhaps server hardware – holds the key to reduced latency and higher performance.

Software may be eating the world, but the days of easy performance boosts from new CPUs are over. Software has to step up to improve performance.

Courteous comments welcome, of course.


Flash slaying the latency dragon?

by Robin Harris on Thursday, 13 August, 2015

I’m at the Flash Memory Summit this week. Two observations, one short and one long.

Short first.
FMS is much larger than last year. Haven’t seen numbers, but the huge exhibit floor looks like a first-rank storage show. VMworld has been a top storage show, but now it has competition.

Has latency met its match?
Can’t talk about the interesting startups in the area, but an HGST technology demo – i.e. no plans for a product – gives a flavor of what’s to come. They hooked up two servers sporting InfiniBand and NVMe SSDs, ran an I/O test, and showed latencies of ≈2.5µsec for remote drive access.

That’s right: microseconds.

The StorageMojo take
But the interesting startups are claiming even lower numbers. Not clear what these numbers might mean in the real world – they don’t include the problematic latencies of our antiquated storage stack – but the direction is encouraging.

More later.

Courteous comments welcome, of course.


Why it’s time to forget about flash

by Robin Harris on Thursday, 30 July, 2015

Post sponsored by Infinidat

Flash has revolutionized storage, but the industry has lost sight of the customer problem: optimizing storage for availability, performance, footprint, power and cost. Industry analysts aren’t helping.

Let’s forget about flash and remember why we’re here
Gartner recently defined a Solid State Array segment. That was questioned by Chris Mellor at the Register, and he got a spirited, if unconvincing, defense from Gartner’s Joe Unsworth. The basic issue is simple: are markets defined by technology or application?

First principles
Markets are the aggregate of buyers’ decisions about vendor offers. Since IT buyers are herd animals – one of Geoffrey Moore’s key points in Crossing the Chasm – knowing what others are choosing is helpful for risk-averse buyers.

That’s where well-defined market segmentation and analysis can help buyers. That’s also where technology-defined segments – like Gartner’s SSA segment or IDC’s munging of object and file storage – fail buyers.

As I said then:

Done well, market segmentation helps to reveal the underlying dynamics of marketplace activity.

As some commenters then noted, it makes sense to segment by customer need, not technology. But when major vendors want bragging rights, Gartner and IDC are happy to oblige with a flattering but useless technology segmentation.

What defines a segment?
Technologies don’t define market segments. Take Ethernet.

Ethernet started as CSMA/CD (Carrier Sense Multiple Access with Collision Detection) and has evolved through different technologies as speeds have increased. Because the application – local area networking – didn’t change, the radical changes in the underlying technology made no difference to the market segment.

Application trumps technology
Instead of technology, most product segments are defined by application use: what does the product do for the buyer? In the case of enterprise arrays, the defining characteristics are availability, performance and management.

All-flash arrays (AFA): segment or technology? Companies have hyped SSD-based AFAs as an I/O panacea.

But architecture and implementation still matter. An engineer is someone who can do for a nickel what any fool can do for a dollar. Smart choices and quality implementation trump technological determinism every day.

The storage pyramid is an economic fact, not a technical choice. If we had an extremely fast, non-volatile and cheap technology, the storage pyramid would collapse into a single layer.

Our toolkit

  • DRAM: fast, with byte addressability and unlimited life, but expensive and power-hungry.
  • Flash: fast reads, slow writes, large-block addressing, limited life, but cheaper than DRAM and more power-efficient.
  • Disk: limited IOPS, good bandwidth for streaming, small-block addressability, unlimited writes, non-volatility, and low cost.
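
In code terms, tiering is just an economic optimization over that toolkit. A toy cost-per-requirement chooser – the dollar and IOPS figures are illustrative, not quotes – looks like this:

    # Illustrative figures only; real prices and speeds vary constantly.
    tiers = {
        'DRAM':  {'usd_per_gb': 8.00, 'iops': 10_000_000},
        'flash': {'usd_per_gb': 0.50, 'iops': 100_000},
        'disk':  {'usd_per_gb': 0.03, 'iops': 150},
    }

    def cheapest_tier(capacity_gb, iops_needed):
        """Pick the lowest-cost tier that still meets the IOPS requirement."""
        ok = {n: t for n, t in tiers.items() if t['iops'] >= iops_needed}
        return min(ok, key=lambda n: ok[n]['usd_per_gb'] * capacity_gb)

    print(cheapest_tier(1000, 50_000))  # -> flash
    print(cheapest_tier(1000, 100))     # -> disk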

The StorageMojo take
Since the storage pyramid is economic, engineering decisions behind storage architectures are economic too. Buyers should look for the array that offers the highest performance and availability at the best total cost, rather than assuming that any particular technology will offer the “best” solution.

As technologists we are primed to reach for the “solution” be it a pill, a policy or a product. But modern data centers are a bundle of problems and constraints. Flash performance, while helpful, is not a magic bullet.

What we can do is choose the most flexible infrastructure that fits our workloads, data center and budget. Workloads still exhibit locality of reference, both temporally and spatially, so most storage systems – even “all-flash” arrays – use DRAM for caching hot data.
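
Locality is also why even an “all-flash” array fronts its flash with DRAM. A minimal LRU cache – the textbook way to exploit temporal locality, not any particular vendor’s design – fits in a few lines:

    from collections import OrderedDict

    class LRUCache:
        """Keep recently used blocks in a (notionally DRAM-backed) cache."""
        def __init__(self, capacity):
            self.capacity = capacity
            self.blocks = OrderedDict()

        def get(self, lba):
            if lba not in self.blocks:
                return None                      # miss: read from flash or disk
            self.blocks.move_to_end(lba)         # mark as most recently used
            return self.blocks[lba]

        def put(self, lba, data):
            self.blocks[lba] = data
            self.blocks.move_to_end(lba)
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)  # evict least recently used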

Redundancy – in data and hardware – is the foundation of availability. High density and low power consumption help overstuffed data centers breathe easier. Scalability is important too, since whatever tomorrow brings, there will be a lot of it.

And, finally, affordability – the key to any business use of technology. When it comes to storage, hard drives are still the lowest-cost option for active data.

The bottom line is that storage buyers have more choices than ever thanks to new technologies. But don’t buy hype: buy the combination of features and capabilities that is the best fit for your needs. Infinidat believes they’ve come up with a high-performance array whose modern architecture and triple redundancy puts them well above traditional legacy arrays – and I’m inclined to agree.

Courteous comments welcome, of course. This is StorageMojo’s first sponsored post in 10 years. Your thoughts on Infinidat and/or sponsored posts?


NVM is off to the races

by Robin Harris on Wednesday, 29 July, 2015

With the Intel/Micron non-volatile memory announcement – said to be in production today – the race to produce next-generation non-volatile memories has gotten serious. And not a moment too soon!

What did they announce?
You can read the press release here.

Key quote:

3D XPoint technology combines the performance, density, power, non-volatility and cost advantages of all available memory technologies on the market today. The technology is up to 1,000 times faster and has up to 1,000 times greater endurance than NAND, and is 10 times denser than conventional memory.

And it’s byte-addressable.
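
Byte addressability is a bigger deal than it sounds: on a NAND block device, changing one byte means a page-sized read-modify-write, while byte-addressable memory can update in place. A rough sketch of the difference, with mmap standing in for load/store access:

    import mmap

    PAGE = 4096

    # Block-device model: changing 1 byte costs a full-page read-modify-write.
    def update_block_style(f, offset, value):
        page_start = (offset // PAGE) * PAGE
        f.seek(page_start)
        page = bytearray(f.read(PAGE))   # read 4 KiB
        page[offset - page_start] = value
        f.seek(page_start)
        f.write(bytes(page))             # write 4 KiB back

    # Byte-addressable model: one store, no page rewrite.
    def update_byte_style(path, offset, value):
        with open(path, 'r+b') as f, mmap.mmap(f.fileno(), 0) as m:
            m[offset] = value            # one byte, in place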

Image courtesy Intel/Micron.

It looks like they’re productizing the Crossbar RRAM. I’ve written about Crossbar’s technology on StorageMojo and ZDNet.

According to Crossbar, their memory cell uses a metallic nano-filament in a non-conductive layer that can be built on current CMOS fabs. Since Crossbar’s business model is to license their technology, as ARM does, Intel/Micron could be using their technology.

Even Intel’s name “3D XPoint” – pronounced “3D crosspoint” – sounds like Crossbar’s nomenclature.

Other Crossbar stats:

  • They’ve fabricated cells down to 8nm feature sizes.
  • 20x faster writes than flash.
  • 10-year retention at 125F.
  • Up to 1TB capacity per chip.

Clearly, I/M did not echo these stats, which gives me pause. But hey, I/M has smart guys too, so enhancing a licensed technology is likely.

Update: It is now clear that Intel/Micron are rebranding the Numonyx phase change memory, not using Crossbar technology. Sorry! End update.

The press release claims that the new memory is in production. But where?

3D Xpoint™ technology wafers are currently running in production lines at Intel Micron Flash Technologies fab

So they haven’t cranked up a $5B fab to produce this yet. Current production is for sampling later this year with no date for products based on the technology.

The StorageMojo take
Whether this is Crossbar’s technology or not, this is great news for the storage industry. NAND flash has notable deficits as a storage technology, and 3D XPoint addresses those.

But one advantage NAND flash has – and will retain for the foreseeable future – is cost. While the Crossbar technology offers a small feature size – one of flash’s deficits – and potentially a lower cost per bit than flash, it will take years for those advantages to be reflected in device costs.

Nor can 3D XPoint expect to replace DRAM. It’s faster than NAND but slower than DRAM, so only devices that don’t need DRAM performance will be candidates for 3D XPoint.

But this announcement reinforces the need for the industry to fix the outdated, disk-oriented, software stack that is holding back I/O performance. The Intel/Micron announcement should focus architects on this vital issue.

Courteous comments welcome, of course.


The storage tipping point

June 29, 2015

Storage is at a tipping point: much of the existing investment in the software stack will be obsolete within two years. This will be the biggest change in storage since the invention of the disk drive by IBM in 1956. This is not to deprecate the other seismic forces of flash, object storage, cloud and […]


Hike blogging: the Twin Buttes loop

June 21, 2015

Summer finally arrived in Northern Arizona, about 6 weeks later than usual. Good news: no wildfires, thanks to lots of rain. Bad news: I was freezing! I got out to the Twin Buttes before 7am – and was a little late. Shade is a valuable commodity in the Arizona summer! In another 10 days or […]


Can NetApp be saved?

June 17, 2015

If NetApp is going to save itself – see How doomed is NetApp? – it needs to change the way it’s doing business and how it thinks about its customers. Or it can continue as it is and accelerate into oblivion. NetApp’s problem NetApp is essentially a single-product line company, and that product line is […]


Why it’s hard to meet SLAs with SSDs

June 3, 2015

From their earliest days, people have reported that SSDs were not providing the performance they expected. As SSDs age, for instance, they get slower. But how much slower? And why? A common use of SSDs is for servers hosting virtual machines. The aggregated VMs create the I/O blender effect, which SSDs handle a lot better […]


Make Hadoop the world’s largest iSCSI target

June 1, 2015

Scale out storage and Hadoop are a great duo for working with masses of data. Wouldn’t it be nice if it could also be used for more mundane storage tasks, like block storage? Well, it can. Some Silicon Valley engineers have produced a software front end for Hadoop that adds an iSCSI interface. The team […]


Hospital ship Haven in Nagasaki, Japan, 1945

May 25, 2015

StorageMojo is republishing this post to mark this Memorial Day, 2015. In a few months we will be marking the 70th anniversary of the end of World War II as well. My father was a career Navy officer and this is a small part of his legacy. See the original post for the comments, many […]


No-budget marketing for small companies

May 13, 2015

You are a small tech company. You have a marketing guy but it’s largely engineers solving problems that most people don’t even know exist. How do you get attention and respect at a low cost? Content marketing. When most people think about marketing, they think t-shirts, tradeshows, advertising, telephone calls, white papers and brochures. Those […]


Hike blogging: Sunday May 10 on Brins Mesa

May 11, 2015

The Soldiers Pass, Brins Mesa, Mormon Canyon loop is my favorite hike. It has about 1500 feet of vertical up to over 5000 ft and the combination of two canyons and the mesa means the scenery is ever changing. This shot is taken looking north from the mesa to Wilson Mt. It was a beautiful […]


FAST ’15: StorageMojo’s Best Paper

May 11, 2015

The crack StorageMojo analyst team has finally named a StorageMojo FAST 15 Best Paper. It was tough to get agreement this year because of the many excellent contenders. Here’s a rundown of the most interesting before a more detailed explication of the winner. CalvinFS: Consistent WAN Replication and Scalable Metadata Management for Distributed File Systems […]
