IBM and Fujifilm have demonstrated a technology that, if productized, could give us a 70 TB LTO tape cartridge. Tape isn’t dead – that will be a long time coming – but its vital signs aren’t good, either.
Vacuum column, 800 bpi tape drives
Magnetic tape is the oldest digital storage technology still in use. Once mass storage meant tape because drums – and later, disks – were tiny and absurdly expensive.
IBM and Fujifilm demonstrated a density of 29.5 billion bits per square inch on linear tape. Disks are approaching 1 Tbit per square inch, but in a controlled environment and with much less media area.
Theoretically that density supports 35 TB of uncompressed data – or 70 TB of compressed data – in a single LTO (Linear Tape-Open) cartridge.
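For the curious, here is the back-of-envelope arithmetic behind that figure. The areal density is the demonstrated number; the tape length, width and format efficiency in the sketch below are my own rough assumptions, not figures from the announcement.

```python
# Back-of-envelope cartridge capacity from areal density.
# Only the areal density comes from the demo; tape length and
# format efficiency are assumptions for illustration.

AREAL_DENSITY_BITS_PER_SQIN = 29.5e9   # demonstrated density
TAPE_LENGTH_M = 800.0                  # assumed, roughly LTO-class tape length
TAPE_WIDTH_IN = 0.5                    # half-inch tape
FORMAT_EFFICIENCY = 0.6                # assumed fraction left after servo tracks,
                                       # ECC and formatting overhead
INCHES_PER_METER = 39.37

recordable_sqin = TAPE_LENGTH_M * INCHES_PER_METER * TAPE_WIDTH_IN * FORMAT_EFFICIENCY
capacity_tb = recordable_sqin * AREAL_DENSITY_BITS_PER_SQIN / 8 / 1e12

print(f"~{capacity_tb:.0f} TB uncompressed, ~{capacity_tb * 2:.0f} TB at 2:1 compression")
```

With those assumptions the result lands right around the 35 TB / 70 TB the companies quote.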
Current LTO tapes, even with compression, are at about 2 TB per cartridge — the same as high-end disk drives. In nine months those 2 TB disks will cost about the same as a single LTO cartridge. Why store data on tape when it is so much slower to access?
Defenders point to tape’s energy efficiency — write once and shelve without consuming more energy for decades — but people like the convenience of random-access data. If the drive industry woke up and started offering archive-quality disks — Seagate sold an automotive hard drive that carried a 10-year warranty — much of the remaining tape market would disappear.
Lifespan is another benefit of tape technology. I recently transferred a 20-year-old VHS tape that hadn’t been looked at in at least 10 years to my computer. There was some drop out but the picture was very watchable. Try that with a 20 year old disk drive.
Technology
Whether it is commercially feasible or not, the IBM/Fuji technology is impressive:
- Advanced nano particle technology — they limited the size of the barium ferrite particles to 1,600 nm³ — approximately one third the volume of current metal particles.
- Advanced nano coating technology — a smooth and thin magnetic layer with very low variability reduced signal fluctuation significantly, enabling more accurate signal processing.
- Advanced nano dispersion — a new material that controls agglomeration, enabling more uniform dispersion of the nano particles.
- Nano perpendicular orientation — taking advantage of the barium ferrite particles’ crystalline magnetic anisotropy, a perpendicular orientation improved high-frequency characteristics.
But the remaining obstacles are daunting: mass production of tiny uniform nanoscale particles; mass production of an extremely smooth and thin magnetic layer; and careful control of the particle dispersion and orientation. Plus heads and transports accurate enough to take advantage of the density.
That added technology raises tape’s entry price – further restricting the market – and it isn’t easy to see what, if anything, can reverse that dynamic.
The StorageMojo take
Regardless of whether you think tape has a long-term future, this is an impressive demonstration. When I introduced DLT at DEC, customers were thrilled to get to 2.6 GB on a tape cartridge.
If they can get the cartridge to market in the next 5 years, they can charge 5x what a disk costs – because the capacity is so much higher than any single disk. If they can’t – well, it was a neat tech demo.
Drive marketers should see that a massive archive disk market is fast approaching. Cheap USB 3 SATA drive docks will enable millions to store their memories on rarely used disks – and to rapidly access all the data.
Nevertheless, tape remains the most proven archival storage medium for digital data. Tape may yet live to see that 70 TB cartridge delivered.
Courteous comments welcome, of course. I had an audio cassette recorder for storage on my first computer. Couldn’t afford $800 for a 144 KB floppy disk. I now have 11 disks – and 2 optical drives – on my Mac Pro. That cassette recorder was my 1st – and last – tape drive.
“Defenders point to tape’s energy efficiency – write once and shelve without consuming more energy for decades.”
Decades? The biggest issue with tape as a high-volume archive has been drive obsolescence as technology marches forward. Any app requiring access to offline media more than 5-7 years old needs a good media refresh policy. Having data and accessing it are two entirely different beasts, but I’d love to see the verify and restore times on those whoppers.
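On the verify and restore times: even optimistic arithmetic says a full sequential pass over a cartridge that size takes days. The capacities and throughputs in the sketch below are assumptions for illustration, not specs for any announced drive.

```python
# Rough time for one full sequential verify or restore pass.
# Capacities and drive throughputs here are illustrative assumptions.

def full_pass_hours(capacity_tb, throughput_mb_per_s):
    """Hours for one sequential pass over the whole cartridge."""
    return capacity_tb * 1e6 / throughput_mb_per_s / 3600   # 1 TB = 1e6 MB

for tb, mbps in [(1.5, 140), (35, 140), (35, 500)]:
    print(f"{tb:>4} TB at {mbps} MB/s: ~{full_pass_hours(tb, mbps):.0f} hours")
```

Even granting a much faster future drive, a full pass over a 35 TB cartridge is still a day-scale operation.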
I think using highly-efficient, deduped disk technologies like NetApp and/or Greenbytes will win in the long run.
“I recently transferred a 20-year-old VHS tape that hadn’t been looked at in at least 10 years to my computer. There was some drop out but the picture was very watchable. Try that with a 20 year old disk drive.”
Try that with a 20 year old digital tape (rather than analog tape).
Tape won’t go away, if for no other reason than that it’s the most inexpensive method of archiving, and every business concerned about archiving is built around this premise.
Reading the tape 10 years down the road may be an issue for run-of-the-mill companies, but defense contractors and medical device manufacturers that are required to keep their data for long periods of time retain equipment for such eventualities.
Dale, strangely enough, tape media 7+ years old are read daily in my area, without data loss in 99.9% of cases.
If you have dealt with low-end tape solutions I can understand your frustration with tape. Today’s tape technologies are far more advanced than disks where reliability is concerned. That was not always the case just 10 years ago.
Most old open-systems backup software is still available, like old servers on eBay. It also uses the TAR or CPIO format in most cases, so getting the data out is fairly easy if there is a problem.
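To illustrate how standard those formats are: a tar archive (or a tape image dumped to a file) can be listed and unpacked with nothing beyond stock tooling. The file name in the sketch below is hypothetical.

```python
# Listing and extracting a POSIX tar archive with only the standard library.
# "old_backup.tar" is a hypothetical file name, e.g. a tape image copied to disk.

import tarfile

with tarfile.open("old_backup.tar", "r:*") as archive:    # "r:*" autodetects compression
    for member in archive.getmembers():
        print(member.name, member.size)                    # inventory the contents
    archive.extractall(path="restored")                    # unpack under ./restored
```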
On mainframes it is simpler, with very few tape formats around. It is not rare to see 20+ year old media on them. Just this week I decommissioned a few 20-year-old StorageTek 9310 robots with several thousand media. Those media are still readable by old 36-track drives (400 MB to 1.6 GB each) still under a support contract with Sun.
I would like to see your data migration path when you have petabytes on dedupe with any box…
On tape we simply leave the media on the shelf and keep two or three tape drives around for potential restores (if any), along with the backup software of course. Who needs 2+ year old data? Very, very few. For some government agencies it is a centuries-long data archive. Certainly not on disks.
In your case you will have to move all the data out before decommissioning your boxes. Not an easy task when you need to keep data around for a long time. The life expectancy of a disk model today is less than 2 years at the pace vendors EOL them. Just try to get new enterprise-class 250 or 750 GB SATA drives for your NetApp today.
I like dedupe technologies for backups; shoot, even rsync with hard links is full of win for doing backups of mostly-unchanged data. However, I see companies presenting deduplicated online data as a “backup solution” and I don’t get it. What happens when a multi-drive failure kills your backup device? Instead of losing one round of backups, you’ve lost all of them. You’ve also got to contend with drive/controller firmware bugs, malicious insiders and outsiders, and human error wiping out your entire backup history in one shot, versus one tape at a time.
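A quick sketch of the “rsync with hard links” idea: each snapshot looks like a full copy of the tree, but unchanged files are hard links back to the previous snapshot, so they take no extra space. The paths and the crude “unchanged” test below are simplified assumptions, not a production tool.

```python
# Minimal hard-link snapshots in the spirit of `rsync --link-dest`:
# unchanged files are hard-linked to the previous snapshot, changed
# or new files are copied. A sketch only, not a backup product.

import os
import shutil

def unchanged(a, b):
    """Crude test: same size and modification time."""
    sa, sb = os.stat(a), os.stat(b)
    return sa.st_size == sb.st_size and int(sa.st_mtime) == int(sb.st_mtime)

def snapshot(source, prev_snap, new_snap):
    for root, _dirs, files in os.walk(source):
        rel = os.path.relpath(root, source)
        os.makedirs(os.path.join(new_snap, rel), exist_ok=True)
        for name in files:
            src = os.path.join(root, name)
            dst = os.path.join(new_snap, rel, name)
            prev = os.path.join(prev_snap, rel, name) if prev_snap else None
            if prev and os.path.exists(prev) and unchanged(src, prev):
                os.link(prev, dst)        # unchanged: hard link, no extra space
            else:
                shutil.copy2(src, dst)    # new or changed: real copy

# Hypothetical usage:
# snapshot("/data", "/backups/2010-01-31", "/backups/2010-02-28")
```

Each snapshot directory can then be browsed, or swept to offline media, as if it were a full backup.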
It’s possible that tape is at the end of its run, but offline storage is not. If drives get cheap enough, maybe someone will make a standard interface and an autoloader that uses drives.
Tape as a medium will be saved IF the industry redefines itself. Personally I find myself struggling to find a place for tape; however, as new compliance regulations like SOX and PCI take hold of IT, I believe tape will be used in journal and archival roles if Blu-ray or other technologies don’t step up to the plate.
True Costing.
The true cost of any storage system is inclusive of the OPEX of running it. For tape it can be quite high, leaving the owners of archival systems pining for something better. Which is not to say that there isn’t impedance in finding the “something obviously better”. There is such impedance. But still, one wants.
Joe Kraska
San Diego CA
USA
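Joe’s point about operating cost can be made concrete with a toy comparison. Every price, power draw and labor figure below is a hypothetical placeholder, not a quote; the only point is that energy and admin time belong in the comparison alongside the media price.

```python
# Toy total-cost-of-ownership comparison for parking 100 TB for 5 years.
# Every number is a hypothetical placeholder, not real pricing.

YEARS = 5
ARCHIVE_TB = 100

def tco(media_per_tb, power_watts, kwh_price, admin_hours_per_year, admin_rate):
    media = ARCHIVE_TB * media_per_tb
    energy = power_watts / 1000 * 24 * 365 * YEARS * kwh_price
    admin = admin_hours_per_year * YEARS * admin_rate
    return media + energy + admin

# Shelved tape draws essentially no power; an always-on disk array does.
tape = tco(media_per_tb=40,  power_watts=0,    kwh_price=0.10,
           admin_hours_per_year=40, admin_rate=75)
disk = tco(media_per_tb=100, power_watts=1200, kwh_price=0.10,
           admin_hours_per_year=20, admin_rate=75)

print(f"tape ~${tape:,.0f} vs disk ~${disk:,.0f} over {YEARS} years")
```

Swap in your own numbers; the ranking flips easily, which is exactly why the OPEX terms can’t be left out.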
Since tape and disk platters have historically shared media technology developments, wouldn’t we be looking at similar jumps in disk drive density as well? Not that a 35 TB disk drive is exactly a good thing, considering the backup headache that even 1 TB drives imply.
Tape is here to stay and not going anywhere. It’s still the most cost-effective archiving platform on the planet today. Organizations are never going to spend insane amounts of their budgets on expensive disk clusters (or even optical libraries) to archive fixed content that’s rarely accessed. The economics simply don’t make sense. Every storage platform has its place in the storage world. The important undertaking is to really understand what the true storage requirements are. Tape, optical and disk all have their place.
Robin,
I’m not sure where you’re getting these numbers from, but disk will never, and I mean never, be as cheap as tape. It all boils down to costing (from manufacturing to logistics and supply chain management). The cost to research and advance tape technology is a fraction of what it takes to advance all other platforms (disk and optical included), and spinning disks do have a shelf life.

All the so-called pitfalls of tape mentioned also apply to disk, and it’s a very valid argument to make that disk is still the most unreliable of the three major platforms (disk, optical and tape). A simple power surge or failure of one disk in an array can render your entire cluster useless. A file system upgrade, firmware, or even a miscalculation at the software level can do the same to the entire array. Those who still argue that disk is a long-term archival medium continue to astound me.

The claim that you have to migrate media when you use tape is simply not true. This only applies when there is no clearly defined offline cataloging and archive management policy in place. When dealing with archives, the technology platform should not even be discussed without a full discovery and cataloging of what’s to be archived. If you or your client have to migrate every few years, then your storage management administrator should be fired, or you should be fired as a consultant / solution provider.
Furthermore, most environments with fixed content that needs to be archived are looking to spend the minimum amount of dollars on data that’s not critical to their primary operation. This is a very legitimate argument and, as a matter of fact, it is what I recommend to most clients. Why spend $2 million on data that brings in less than $500K? The cost to archive fixed content should never be more than 55-70% of what it costs to own and maintain that data. From a business perspective, most people who tout NetApp and other disk manufacturers know better than to go down that road. It’s simply a positioning tool, because even NetApp refreshes and runs EOL (end-of-life) processes on their products every few years, and what happens? Well, the end customer who bought those disk arrays to “archive” gets the phone call from the rep recommending an “upgrade”… Vendor lock-in is a conversation for another time.
Alani Kuye