Remember when Amazon Web Services (AWS) announced Glacier, a data archiving service, almost 2 years ago? Long-term, slow-retrieval (3-5 hours) storage for 1¢/GB per month while maintaining several copies across geographies.

Pretty amazing. Less amazing now that disk prices are reaching 3¢/GB, but there are still power, cooling, mounting and replacement costs to consider, in addition to multiple copies.

Tape? Amazon denied that. Plus long-term tape storage requires a level of climate control that their data centers may not support.

Not tape.

Hard drives to the rescue?
That left disk. Perhaps Shingled Magnetic Recording (SMR) drives that, in theory, could double existing drive density at the cost of expensive rewrites – which an archive wouldn’t need.

Seagate announced they’d sold a million SMR drives – and not through NewEgg. WD is getting on board with SMR as well.

But as Rick Branson, an Instagram engineer, suggested in a tweet:

Economics of AMZN Glacier: 3TB drives are about $0.003/mo/GB racked and powered + erasure encoding = thin, but survivable margins.

That’s ≈$108/drive/year ($0.003/GB/month × 3,000GB × 12 months). Since a 3TB drive costs about that much, over a 3-5 year life the rack, power and redundancy cost comes to 2/3rds to 4/5ths of total cost. At Glacier’s $10/TB/month price – the same in 2012 as today – and with 2012 drive prices well above today’s ≈$108, you wouldn’t need Bezos’ financial acumen to see a non-starter. SMR could double margins, but in 2012 – remember the Thai floods – even Amazon couldn’t make this pencil.
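Here’s that arithmetic as a quick Python sketch. The racked-and-powered figure is from Branson’s tweet; the 2012 drive price and the 3-5 year service lives are assumptions of mine, not Amazon’s numbers:

```python
# Back-of-envelope for the figures above. The $0.003/GB/month
# racked-and-powered number comes from Branson's tweet; the drive
# prices and service lives are assumptions.

GB_PER_DRIVE = 3000
RACK_POWER_PER_GB_MONTH = 0.003   # $/GB/month, racked and powered

opex_per_drive_year = RACK_POWER_PER_GB_MONTH * GB_PER_DRIVE * 12
print(f"opex per drive per year: ${opex_per_drive_year:.0f}")      # ≈ $108

# Share of total cost that is rack/power, for two drive prices and lives
for drive_price, label in [(108, "2014 drive ≈$108"), (200, "2012 drive ≈$200 (assumed)")]:
    for years in (3, 5):
        opex = opex_per_drive_year * years
        print(f"{label}, {years}-yr life: opex = {opex / (opex + drive_price):.0%} of total")

# Gross revenue ceiling per raw drive at 1¢/GB/month, before erasure
# coding and extra copies reduce billable capacity:
print(f"max revenue per drive per year: ${GB_PER_DRIVE * 0.01 * 12:.0f}")   # $360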

Not disks. Even SMR disks.

The plot thickens
But if Glacier’s data were stored on disks – even spun-down disks – why the tape-like 3-5 hour retrieval delay? A fake delay to make sure only archive data is stored? Disk drive robots?

Disks are sensitive to physical handling – never mind the specs. I’ve never seen an HDD-handling robot, or the Zero-Insertion Force drive connector that would be required to minimize physical shock.

One more thing: tape libraries – the obvious robotic starting point – are designed to handle 200 gram tapes, not 600+ gram 3.5″ HDDs.

Not disk robots.

Lightning strikes
It was a couple of blog posts from AWS architect and all-around nice guy James Hamilton that cleared things up.

James wrote Glacier: Engineering for Cold Data Storage in the Cloud at the time of the announcement. The post carefully avoids discussing the underlying storage, but in the comments James says

Many of us would love to get into more detail on the underlying storage technology.

Almost 2 years later, no one has.

Timing is everything
In June 2010 the BD-R 3.0 spec (BDXL) defined a multi-layered recordable disc in BDAV format with capacities of 100/128 GB. In July 2010 Sharp announced triple-layer 100GB BDXL players and recorders.

Two years later, in August 2012, Amazon announced Glacier. Two years is about the time it would take to develop a custom optical disc mass storage system, test it, and announce the service.

Despite the obvious lack of consumer uptake, development continues on high-capacity optical. Somebody is buying these things in volume. Unlike commercial Blu-ray discs – which are stamped, not written – writable optical media requires recording-layer chemistry.

Figure 10,000 triple-layer discs per petabyte, multiply by the petabytes that AWS, FB and others are putting into cold storage each year, and that’s millions of discs per year. Pure OEM revenue with very low sales, marketing and support costs, and regular massive orders delivered every month by the semi-load.
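A rough disc-count sketch: only the 100GB-per-disc figure comes from the spec above, and the annual cold-data intake is a purely hypothetical number for illustration:

```python
# Disc-count sketch. 100GB per triple-layer BDXL disc is from the spec;
# the annual cold-storage intake is a hypothetical figure, not a known one.

GB_PER_DISC = 100
DISCS_PER_PB = 1_000_000 / GB_PER_DISC           # 10,000 discs per petabyte

assumed_cold_pb_per_year = 500                   # hypothetical intake, AWS + FB + others
print(f"discs per petabyte: {DISCS_PER_PB:,.0f}")
print(f"discs per year:     {DISCS_PER_PB * assumed_cold_pb_per_year:,.0f}")   # millions
```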

Another piece of the puzzle
In February James wrote another post, Optical Archival Storage Technology.

He starts with an important comment about today’s market:

It’s an unusual time in our industry where many of the most interesting server, storage, and networking advancements aren’t advertised, don’t have a sales team, don’t have price lists, and actually are often never even mentioned in public. The largest cloud providers build their own hardware designs and, since the equipment is not for sale, it’s typically not discussed publically.

Then he starts discussing the growth of cold data and what FB will be showing at OCP Summit V:

This Facebook hardware project is particularly interesting in that it’s based upon an optical media rather than tape. . . . [T]hey are leveraging the high volume Blu-ray disk market with the volume economics driven by consumer media applications. Expect to see over a Petabyte of Blu-ray disks supplied by a Japanese media manufacturer housed in a rack built by a robotic systems supplier.

I’m sure his friends at FB previewed the preso, but the lack of surprise or affect at the viability of 10,000 Blu-ray discs in a rack is telling: this is the discussion about Glacier he’d like to have. More telling: “the volume economics driven by consumer media applications” as if BD-R and BDXL were a great success. Which they are, but only at Glacier.

Media cost?
The biggest objection to mass optical storage is media cost. While 25GB BD-R media in 100-piece online lots starts around 46¢ – about 2¢/GB – triple-layer BDXL media in quantity 1 starts at ≈$25, or 25¢/GB. How does THAT pencil?

Disc production costs are mostly fixed. Once you set up a line, the variable cost of plastic, chemicals and test is less than $1/disc. If the line is properly sized for expected demand, it can run 24/7, and the learning curve will drive prices even lower.

Assuming aggressive forward pricing by Panasonic or TDK, Amazon probably paid no more than $5/disc – 5¢/GB – in 2012. Written once, placed in a cartridge, barcoded and stored on a shelf, that’s ≈$50 of media per terabyte, less than a comparable hard drive. Blu-ray writers are cheap, so Amazon would recoup variable costs in the first year; after that it’s mostly profit.
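The per-GB arithmetic behind those numbers, as a small sketch – the $5 OEM price is the assumption made above, not a quoted figure:

```python
# Media cost per GB for the prices quoted above.

def cents_per_gb(price_dollars: float, gb: float) -> float:
    return price_dollars / gb * 100

print(f"25GB BD-R  @ $0.46 (100-piece lots): {cents_per_gb(0.46, 25):.1f}¢/GB")  # ≈1.8¢
print(f"100GB BDXL @ $25   (quantity 1):     {cents_per_gb(25, 100):.1f}¢/GB")   # 25¢
print(f"100GB BDXL @ $5    (assumed OEM):    {cents_per_gb(5, 100):.1f}¢/GB")    # 5¢

# A terabyte of written media at the assumed OEM price:
print(f"media cost per TB at $5/disc: ${1000 / 100 * 5:.0f}")                    # $50
```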

The StorageMojo take
Therefore, by a process of elimination, Glacier must be using optical discs. Not just any optical discs, but triple-layer Blu-ray discs.

Not single discs either, but something like the otherwise inexplicable Panasonic 12-disc cartridge shown at this year’s Creative Storage conference. That’s 1.2TB in a small, stable cartridge with RAID, so a disc can fail and the data can still be read. And since the discs weigh ≈16 grams each, 12 weigh just 192g – right in the range of the 200 gram tapes that existing library robotics are built to handle.
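The cartridge arithmetic, for completeness – the disc weight and the tape/HDD weights are the figures quoted earlier in the post:

```python
# 12-disc BDXL cartridge: capacity and weight vs. what tape robotics handle.

DISCS_PER_CARTRIDGE = 12
GB_PER_DISC = 100
GRAMS_PER_DISC = 16

capacity_tb = DISCS_PER_CARTRIDGE * GB_PER_DISC / 1000
weight_g = DISCS_PER_CARTRIDGE * GRAMS_PER_DISC
print(f"capacity: {capacity_tb:.1f} TB raw")            # 1.2 TB
print(f"weight:   {weight_g} g (vs ≈200 g tape cartridges, 600+ g 3.5-inch HDDs)")
```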

For several years I didn’t see how optical disc technology could survive without consumer support. But its use by major cloud services explains its continued existence.

Courteous comments welcome, of course. This analysis is inspired by one of my favorite books, Most Secret War, the great story of British Scientific Intelligence from 1939 to 1945 told by its young physicist director, R. V. Jones. Competitive analysis with life-and-death stakes.

Update: In the just-added link to the Sony-Panasonic press release above, Sony closes by saying:

In recent years, there has been an increasing need for archive capabilities, . . . from cloud data centers that handle increasingly large volumes of data following the evolution in network services.

Gosh, whose cloud data centers could they have in mind? End update.

Update 2: Best tweet on the topic comes from Don MacAskill of SmugMug:

@StorageMojo FWIW, this contradicts what I’ve heard from ex-AWS employees. Their explanation sounded crazier than yours, though. 🙂