Quantum announced a new deduplication appliance series – the DXi 6701 and 6702 – that claims exceptional scalability. Why? Because it uses technology from Quantum’s StorNext cluster file system.
Scale out
Quantum says the units grow from 8 to 80TB of usable RAID 6 capacity – no deductions for landing areas or hidden reserves, and no inflating the number with assumed dedup ratios. And they say they’re fast: 5.8TB/hr using VTL or OST; 5TB/hr for NAS; all dedup at wire speed.
The only difference between the two models is the network: one has 1GigE and the other 10GigE. All the software is included in the price: NAS, OST, VTL, tape support, replication and a client-side dedup option.
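A quick sanity check on those numbers – a sketch in Python; the link-speed comparison is mine, not Quantum’s:

    # Convert Quantum's quoted ingest rates to GB/s and compare to Ethernet
    # wire speed. Decimal units; protocol overhead ignored.
    TB = 1000**4                       # bytes in a decimal terabyte

    vtl_ost = 5.8 * TB / 3600          # quoted VTL/OST ingest, bytes/sec
    nas     = 5.0 * TB / 3600          # quoted NAS ingest, bytes/sec
    gige10  = 10e9 / 8                 # one 10GigE link, bytes/sec

    print(f"VTL/OST: {vtl_ost / 1e9:.2f} GB/s")   # ~1.61 GB/s
    print(f"NAS:     {nas / 1e9:.2f} GB/s")       # ~1.39 GB/s
    print(f"10GigE:  {gige10 / 1e9:.2f} GB/s")    # 1.25 GB/s

Both quoted rates exceed a single 10GigE link, so presumably they’re achieved across multiple ports – or over Fibre Channel, which is how VTL is typically deployed.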
List prices start at $56k. Quantum sells through the channel, so that’s a maximum.
The StorageMojo take
The DXi 670x is a good example of the power and economy of scale-out vs. scale-up. The cluster file system technology underlying it enables the 10x capacity expansion with high performance and low cost.
With a scale-up approach, hardware volumes would be lower and hardware and software qualification and support costs higher. That Quantum owns the underlying technology makes their job that much easier.
Quantum’s market power is a fraction of EMC Data Domain’s. But their architecture’s advantages in performance, flexibility and cost point to a larger trend.
The problem with Quantum’s marketing is that they only play the price/performance card. Important, no doubt, but by ignoring the fundamental advantages of their scale-out architecture, they let the competition sidestep its long-term problem: scale-up doesn’t scale.
Winning in the development lab is only part of the battle. Helping customers appreciate – and making competitors react to – the differences is what it takes to win in the market.
Courteous comments welcome, of course.
Storage Mojo, wouldn’t it make more sense to scale with cheap commodity disks and a solid backup software provider? I find CVLT interesting: more of a data management platform that happens to be very good at de-dup. Since they also have a backup agent residing in the OS layer, the backup vendor (CVLT in my example) can offer source- and target-based deduplication on a global basis. Starting at the client, redundant backup data is eliminated before it ever lands on disk or tape, which also reduces the amount of data traveling over your network. With any 6Gb SAS DAS storage model of choice, this would scale to 96 disks (308TB usable).
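(A rough sketch of how 96 disks could pencil out to that figure; the per-disk capacity and RAID 6 group width below are my assumptions, since the comment doesn’t specify them:)

    # 96 commodity disks in assumed RAID 6 groups; disk size and group
    # width are guesses -- the comment gives only the 308TB usable total.
    disks, disk_tb, group = 96, 4, 12          # 12-wide RAID 6: 10 data + 2 parity

    usable = (disks // group) * (group - 2) * disk_tb
    print(f"{usable} TB usable")               # 8 groups x 10 data x 4TB = 320 TB

A few percent of formatted-capacity loss, or a spare per enclosure, would land that nominal 320TB right around the quoted 308TB.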
I have to agree with JJ.
Five years ago, de-dup required a dedicated appliance with custom hardware to achieve wire speed and decent de-dup results.
Now the latest Intel Xeons include new instructions that accelerate SHA1/SHA256…
http://www.sisoftware.net/?d=qa&f=cpu_intel_sb
With a beefy enough server, software-only de-dup should be able to achieve wire-speed performance with the same order of magnitude de-dup results.
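The core of what that software has to do is simple – a minimal Python sketch, assuming fixed-size chunks and an in-memory index (real products use variable-size chunking and on-disk indexes):

    import hashlib

    CHUNK = 128 * 1024       # fixed-size chunks for simplicity
    store = {}               # fingerprint -> chunk; stands in for the dedup pool

    def dedup_write(stream):
        """Keep only unique chunks; return the fingerprint recipe to rebuild the stream."""
        recipe = []
        while chunk := stream.read(CHUNK):
            fp = hashlib.sha256(chunk).digest()   # the SHA256 work the new Xeons accelerate
            store.setdefault(fp, chunk)           # duplicate chunk? skip the store
            recipe.append(fp)
        return recipe

    # e.g.: with open("backup.img", "rb") as f: recipe = dedup_write(f)

Hashing dominates the CPU cost, which is why the SHA acceleration matters: once fingerprints come at wire speed, only unique chunks ever touch disk or the network – the source-side dedup JJ describes.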
If the TCO is lower (this is the big IF*), the software solution wins.
* In the case of CVLT, their pricing/licensing model needs to change further to provide the flexibility needed to scale out and win the TCO battle.
An addendum to the above comment: I didn’t mean to pick on CVLT – they are by far the closest to delivering a compelling software-based de-dup solution that can compete against the appliance-based offerings.