6 weeks ago StorageMojo covered the leaving-stealth-mode non-announce of Atrato’s new storage box. I spoke to Dan McCormick, Atrato’s co-founder and CEO, a few days ago for an update.
They’ll have more details at SNW. But here’s what I found interesting.
Density and capacity
The new Atrato box is 3U, not 5, and has about 200 2.5″ drives, for 50 TB raw. With the new 500 GB 2.5″ drives coming out, they’ll be able to do 100 TB.
That blows away the density of EMC’s soon-to-be-announced Hulk box. And with the declining delta between 3.5″ and 2.5″ drive capacities, the Atrato box should increase their capacity per rack unit lead.
Performance
In a refreshing change from normal industry practice Atrato quotes IOPS to disk, not cache. Thus their quoted 10,000 IOPS is a real-life number. Dan said that one user got up to 20,000 IOPS after tuning their app.
Apps with big files and large I/Os need disk I/Os, not cache I/Os. Most controllers turn off cache when they see large I/Os anyway. Quoting cache IOPS to their market would be a mistake.
Power
Atrato claims an 80% reduction in power per I/O. Two-thirds of that is due to the power efficiency of 2.5″ drives. The remaining third, though, is their own special sauce.
Virtual drive hospital
When a drive starts acting up – and with 200 drives that doesn’t take very long – their software “pulls” the drive and tests it. If the drive is failing they leave it alone, but Atrato has found that over half the problem drives can be put back into service.
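Here’s a rough sketch of that workflow in Python; the names and logic are my illustration, not Atrato’s actual code:

```python
# Illustrative "drive hospital" workflow: a misbehaving drive is pulled
# from the active array, exercised offline, and either returned to
# service or retired. All function and field names here are invented.

def triage_drive(drive, run_diagnostics):
    """Pull a suspect drive, test it, and decide its fate.

    run_diagnostics(drive) -> True if the drive passes offline tests.
    Returns the drive's new state: 'in_service' or 'retired'.
    """
    drive["state"] = "hospital"          # removed from the active array
    if run_diagnostics(drive):
        drive["state"] = "in_service"    # over half recover, per Atrato
    else:
        drive["state"] = "retired"       # genuinely failing: leave it out
    return drive["state"]

suspect = {"id": 7, "state": "in_service"}
print(triage_drive(suspect, lambda d: True))  # in_service
```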
The StorageMojo take
Still cool. An interesting metric will be uptake into space and power constrained enterprise data centers. If power really is an issue – and while I’m sure it is at some level, the priority is the question – I’d expect to see all the big NYC data centers testing these things within 90 days.
Comments welcome, of course. Dan also commented that StorageMojo’s original Atrato post was the best researched and most insightful of all the reportage they saw. Flattery works.
You didn’t comment on the coolest aspect of these appliances and similar ones which is the self-healing technology.
http://infiniteadmin.com/atrato-velocity-v1000-no-maintenance-self-managing-array-of-identical-disks-said
Wow. I am impressed. Very innovative design. Without a doubt this is the product with the highest TB per rack unit out there. They seem to have a very high IOPS/watt ratio too.
Now all that remains to be seen is their prices…
After looking through most of their documentation it seems that there are only 160 drives in a box. To get to the 50TB raw figure, they seem to be using the 320GB drives (160 × 320GB ≈ 51.2TB).
It’s a pity that there doesn’t seem to be anything on their site about the connectivity options, processor redundancies, replication or clustering. If they provided a way to create a cloud of these they would probably be on the top of my solution list for permanent near-line archiving of about 60TB of data.
Still very impressive tech though. I just hope they get below the $7/GB price ($140k for a 20TB system) I saw at http://www.theregister.co.uk/2008/03/26/atrato_v1000/
Robin,
How is this sealed when there are vent holes in the front panel?
Does “sealed” in this case mean “vertically mounted”, i.e. not removable from the front?
Why seal them?
It is obvious that one can easily package 25 x 2.5 inch disks, mounted horizontally, across the 17.2 inch front panel opening. Multiply this by 4 deep and you have 100 disks. I think all RAID manufacturers can do this in a 3U height, with some vertical space to spare. The reason they don’t is the high cost of 2.5 inch disks. Lower power consumption comes in for free.
One possible reason for “sealing” is that the drives are ‘bolted’ to an aluminum cast-type of infrastructure, conduction-cooled by a large, separate air-cooled heatsink. This would explain the need for front panel perforations, and the real reason for mechanical “sealing”, i.e. you can’t remove bolted-in disks.
So … they have eliminated dust and equalized the temperature across all disks with extra complications and cost … how important is this? I hope that the US Patent Office has improved its approach to useless / trivial patents, as such schemes have been in use for some time in rugged equipment.
They use standard drives… nothing special here. I believe that most of the RAID people know how to deal with block-level repairs, spares pools & otherwise suspect drives. So let us see where the magic is… silence…
Given enough drives, all kind of IOs per second specs are possible…. but at what cost.
Everyone is waiting for 2.5 inch drives to come down in price and all of this should happen without patents or tricks…. I don’t see a point… or is this just me ?
Robin,
One more issue… let’s see how they deal with the sequential bandwidth available from this 200 disk configuration and how they approach HA issues… I suspect more silence.
If the V1000 has only (!) 160 drives, that is hard to reconcile with the reported hundreds of drives in the thing. If so, it has the same mini-unit capacity (20 drives) as my take on Xiotech’s coming ISE, with its sealed units holding 20 drives. I believe that, for now, the ISE uses 3.5-inch drives, so it could have 20TB raw capacity per sealed ISE and 40TB per rack shelf box. If the Atrato box is taken up then I think we’ll see a flood of 2.5-inch array products coming.
Semi-off-topic post; I apologize in advance.
Robin, I’m tired of reading about the same soon-to-be-announced vaporware in almost every StorageMojo post. Given that some of us lowly customers haven’t been briefed under NDA about this wonder-storage, your constant mentions sound like FUD. StorageMojo is a great blog but this one aspect annoys me.
Wayne, they told me 200 but I haven’t looked at the doc set. Could there be 40 spare drives? That would be 1 a month for 3 years. . . .
Richard, I think the “sealed” means “no user replaceable parts”. I agree that anyone could do this – but no one else has. The cost per GB issue is real, which is why I like the “no maintenance” 3 year life. Typically 3 years of maintenance adds 20-40% on the base price, so it lessens the perceived price differential.
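To put illustrative numbers on the maintenance point (a hypothetical $100k base price, not a real quote):

```python
# Effective 3-year cost of a conventional array once maintenance is
# added, vs. a no-maintenance box. All figures below are hypothetical.
def three_year_cost(base_price, maint_fraction):
    return base_price * (1 + maint_fraction)

low = three_year_cost(100_000, 0.20)   # 20% maintenance uplift
high = three_year_cost(100_000, 0.40)  # 40% maintenance uplift
print(low, high)  # 120000.0 140000.0
```

So a no-maintenance box can carry a 20-40% higher sticker price and still break even over three years.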
Chris, Atrato has to have significantly higher density than 3.5″ drives or they’ve failed. I did the GB per cc once and IIRC it was about 7x. How can they not do better? Nice analysis on the Xiotech box BTW. If you’re correct it looks like they’ve missed the mark.
Wes, I don’t sign NDAs with non-clients. Maybe I shouldn’t write about things a week after everyone else but get them out ASAP. True, I have written about Cleversafe, Isilon, TMS and now Atrato again this month, so you may have a point. I’ll see what I can do.
Robin
Chris,
Again, I fail to see any hint of a ‘great’ invention in Xiotech ISE…. but I have not looked into their patents.
Reading your article, it says ” …. reliable array built with components containing 20 hard drives and dual controllers. These are Integrated Storage Elements (ISE) and they are virtualized into a single and self-healing storage pool.”
This is a standard, very basic way of doing HA RAID … i.e. dual coherent controllers, a bunch of disks … some of which are global spares, virualized into multiple ‘raidsets’ , which in turn present multiple ‘error free’ LUNS.
If required, these LUNs can be striped or ‘virtualized’ once again into RAID- protected volumes….one box or many boxes….
Pretty much standard stuff… for a lot of money… no ?
Robin,
I am pleased to see that you agree that anyone could do this… but look at the money and energy being spent on marketing hype and (probably) trivial patents.
I hope there is something new in some of this.
I read through their entire whitepaper on storage and they only mention 160 drives once or twice. It is possible they’ve managed to squeeze more in there since they wrote it, though.
From what I’ve read, the spare drive allowance seems to be variable. They quote that somewhere between 10-15% will be required over 5 years (although they use 3 years pretty much everywhere in their report), which matches the stats I’ve seen quoted from that Google report and the other one (which I can’t recall right now): failures start at about 1% per year and increase by roughly 1 point each year (1 + 2 + 3 + 4 + 5 = 15%).
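That failure arithmetic as a one-liner (my restatement of the simple model above, not Atrato’s actual reliability model):

```python
# Cumulative drive failures assuming the annualized failure rate starts
# at 1%/yr and rises by 1 point each year.
def cumulative_failure_pct(years):
    return sum(range(1, years + 1))  # 1 + 2 + ... + years, in percent

print(cumulative_failure_pct(5))  # 15% over five years
print(cumulative_failure_pct(3))  # 6% over a three-year service life
```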
Their RAID and predictive health software is what really impresses me about Atrato. They can do RAID10 (and a 3-way version), RAID50 and RAID60. With RAID10, they can preemptively rebuild a drive if they think one of them is about to fail and then pull & test it. They constantly (at a *background* rate) test all sectors on all their hot spares, and they can reduce performance (and therefore power usage and heat generation) in environmental situations that would affect the lifetime of the device.
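The preemptive-rebuild idea can be sketched like this (illustrative logic only; none of these names come from Atrato): with the mirror still healthy, data is copied to a hot spare before the suspect drive is pulled, avoiding a degraded-mode rebuild.

```python
# Preemptive rebuild sketch for a mirrored (RAID 10) pair: copy the
# suspect drive's data to a hot spare while its mirror is still healthy,
# then pull the suspect for offline testing. Names are hypothetical.

def preemptive_rebuild(mirror_pair, hot_spares, is_failing_soon):
    suspect = next((d for d in mirror_pair if is_failing_soon(d)), None)
    if suspect is None or not hot_spares:
        return None                      # nothing to do, or no spare free
    spare = hot_spares.pop()
    spare["data"] = suspect["data"]      # rebuild from live data, not parity
    mirror_pair[mirror_pair.index(suspect)] = spare
    suspect["state"] = "hospital"        # pulled for offline testing
    return suspect
```

The point of the design is that the copy happens while both mirrors are intact, so the array never runs degraded.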
Please let me introduce myself. My name is Steve Visconti and I am the EVP of Atrato, Inc.
Today I wanted to address some of the questions on Robin’s blog with respect to the Atrato Velocity 1000 product line. The Atrato V1000 is a self-maintained array of identical disks (SAID) purpose-built for access and performance density.
On average the V1000 will exceed 11,000 IOPS. I say on average because, as most of you know, performance numbers vary based on initiators, file systems, whether the file system utilizes cache, RAM configuration, and buffer schemes. In a specific configuration based on Windows Server 2003 Enterprise Edition with four 4G Fibre Channel links, the V1000 will sustain well in excess of 11,000 IOPS in 3U. (The system also utilizes a 2U controller which can be interconnected for redundancy.) If the application is video streaming then the V1000 will support over 3000 standard definition streams, or 1.2GB/s streaming capability.
Having mentioned Fibre Channel, the V1000 system is a multiprotocol architecture today supporting 4Gig FC SAN and NFS NAS. Over the next few months Atrato will be announcing additional connectivity support.
Pricing on the Atrato V1000 system varies based on customer configurations. Options for higher capacity or screaming performance will affect the configured price. The SAID utilizes 2.5 inch SATA drives which come in various RPMs and storage capacities. The enclosure itself is configurable from 96 drives to 160 drives, with higher densities to come. As you would expect, the Atrato Virtualization Software allows for access of hundreds of drives across multiple arrays.
I appreciate the forum here and look forward to future dialog.
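A quick sanity check on those streaming figures (my arithmetic, using the numbers Steve quotes): 1.2 GB/s spread across 3,000 SD streams is 0.4 MB/s each, about 3.2 Mbit/s per stream, which is a plausible standard-definition bitrate.

```python
# Per-stream bandwidth implied by the quoted aggregate figures.
total_bytes_per_s = 1.2e9  # 1.2 GB/s aggregate streaming throughput
streams = 3000             # standard-definition streams claimed

per_stream_mbit = total_bytes_per_s / streams * 8 / 1e6
print(per_stream_mbit)     # 3.2 Mbit/s per stream
```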
Steve,
Given the background of some of your technical people, I am sure that Atrato can tackle all of the required design tasks. However, firmware takes time to write and to become field-proven … and some issues are difficult to solve.
Additional answers would be helpful.
1. Given the number of disks and the four FC channels, the IOPS figures are believable.
However, it would be interesting to know what size of data payload is used for this test, separate numbers for reads and writes… if this is across one or two controllers … and the number of disks and LUNs per FC initiator.
2. It seems that the 3U system contains only one controller. An additional 2U enclosure is required for the second controller, and presumably you do support multi-ported LUNs.
How is this connected for redundancy and power outages, and how do you manage multi-ported LUN coherency, i.e. should the initiators span different asynchronous data sources (servers)?
3. The 1.2 GByte streaming figure… presumably this is under RAID 6 in rebuild mode.
If not, then which RAID level was used, and in what condition, i.e. degraded?
Is this through a single controller and across the four FC channels?
4. Your stated performance figures presumably hold or improve across coherent, dual-controller HA configuration ?
Thanks,
Richard
After a quick search, I was not able to find any patent applications for Atrato or Sherwood Information Services. Why?
I have a question as to why the IOs are so low on the Atrato box. Maybe the disks are slow or low quality??? The types of disks generally used in most SANs are usually in the range of 300 IOPS per spindle on the high end (Fibre Channel), and 90 IOPS per spindle on the low end (SATA). So the simple math says that if you take the number of spindles in a system and multiply it by the number of IOPS each drive can do, you get the total IOPS. 11,000 IOPS for 100 drives seems pretty slow to me. Check the Xiotech SPC benchmark numbers with only 20 or 40 drives in a system: storageperformance.org
Also, what about heat? That’s the reason why no one has done this before. Can you imagine the heat put out in 3U with 100 drives? The fans have got to be turbine engines to keep the drives cool. Vibration and heat are drive killers.
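Kelly’s back-of-the-envelope method, written out (a naive upper-bound model, not a benchmark result):

```python
# Naive aggregate-IOPS estimate: spindles times per-spindle IOPS.
# Per-spindle figures are the rough ranges Kelly cites above.
def aggregate_iops(spindles, iops_per_spindle):
    return spindles * iops_per_spindle

print(aggregate_iops(100, 90))   # 9,000 IOPS: 100 low-end SATA spindles
print(aggregate_iops(160, 90))   # 14,400 IOPS: a 160-drive SATA box
```

By that measure, 11,000 IOPS from 160 SATA spindles is close to the naive ceiling rather than far below it.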
The patents on the Atrato unit are 7,280,353 and 7,167,359; they are under Sherwood Information…
Sadly, these patents are a very good example of what is wrong with the US Patent Office.
Kelly said, on April 9th, 2008 at 5:28 pm: “I have a question as to why the IOs are so low on the Atrato box. Maybe the disks are slow or low quality??? The types of disks generally used in most SANs are usually in the range of 300 IOPS per spindle on the high end (Fibre Channel)…”
Kelly,
I wonder what disks you had in mind when you said “usually in the range of 300 IOPs per spindle”.
I am not aware of any disk with an average seek time of 1.3ms. Therefore, I doubt that 300 IOPS is achievable with a random workload.
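That objection in numbers (the drive parameters below are typical published specs of the era, not measurements): a drive’s random-IOPS ceiling is roughly 1 / (average seek + average rotational latency), so 300 IOPS would need a total service time of about 3.3 ms.

```python
# Random-IOPS ceiling for a single drive: 1 / (avg seek + avg rotational
# latency). The seek times and RPMs below are illustrative, typical specs.
def max_random_iops(seek_ms, rpm):
    rot_latency_ms = 60_000 / rpm / 2   # half a revolution on average
    return 1000 / (seek_ms + rot_latency_ms)

# A 7,200 RPM drive with an 8 ms average seek:
print(round(max_random_iops(8.0, 7200)))   # ~82 IOPS
# Even a 15k Fibre Channel drive with a 3.5 ms seek:
print(round(max_random_iops(3.5, 15000)))  # ~182 IOPS
```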
A while back, one of my colleagues asked me to come take a look at his Atrato system in test-mode. I have not yet read the white papers, but the claim of 11,000 IOPS seems to be exaggerated based on performance tests described to me and there were some recurring issues involving data corruption. Are there any proven and stable deployments that can be referenced? The concept is interesting, but I have yet to really see the system in action as it was awaiting support and (possibly) rebuild when I stopped by to take a look at it out here…
Based on the lack of response, I guess there are no answers to my questions. Evidently, the inability to provide answers extends beyond this forum, as my colleague complains about plenty of contact from sales but little to none from support… I guess that is the answer to my question. I keep checking the Atrato website for deployment announcements, a.k.a. references, but see absolutely none listed. I’ve spent enough time looking into this; time to move on to something else.
Chris mentions a colleague is in test mode but has data corruption issues and no support from Atrato. We use 128-bit digests which, combined with the drives’ existing 10^-15 bit error rate, provide an absurdly high detection rate on real data corruption when applied at the 512 byte LBA and 4K block level. If the colleague has support issues he should contact Atrato directly; Atrato provides a 7 x 24 hotline support line, and all customers and prospective customers evaluating the Velocity 1000 have our toll-free number.
The Atrato V1000 has consistently achieved 11,000 IOPS in a variety of configurations, and Atrato is willing to demonstrate to anyone interested. The Atrato Velocity 1000 is the highest performing storage array on the market, rack unit to rack unit or full rack to full rack. We focus on high-throughput applications including VOD, IPTV and applications which require high IOPS, where the application’s nearly random storage requests require a high number of disk actuators. White papers and customer testimonials are posted on our site http://www.Atrato.com.
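The per-block digest scheme can be sketched generically (a truncated SHA-256 stands in here; the comment doesn’t say which digest Atrato actually uses):

```python
import hashlib

BLOCK = 512  # bytes per LBA

def digest128(block: bytes) -> bytes:
    """128-bit integrity digest for one 512-byte block (illustrative:
    truncated SHA-256 stands in for whatever digest Atrato uses)."""
    return hashlib.sha256(block).digest()[:16]

def verify(block: bytes, stored_digest: bytes) -> bool:
    # Recompute on read; a mismatch flags silent corruption.
    return digest128(block) == stored_digest

data = b"\x00" * BLOCK
tag = digest128(data)          # stored alongside the block on write
print(verify(data, tag))       # True: block reads back intact
```

With a 128-bit digest, the chance of corruption producing a matching tag by accident is about 2^-128, which is why the combined undetected-error rate is so low.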
I appreciate the response. I re-read my post; that should have read VLUN corruption, not data corruption. Rebuilding a corrupted VLUN is detrimental to maintaining data integrity. The technology seems impressive, but how is the stability? I heard some talk a few weeks ago about a possible add-on to help with that aspect.
Disclaimer: My information is older than my post, I did not find this forum for a while and decided to toss a few questions up. I’ll try to get an update from my colleague if possible.
To date Atrato has not encountered any VLUN corruption issues with our customers. If your colleague would like assistance with the previously noted issue, please have them contact Atrato directly at 720.536.4000. The stability of the product is outstanding. We are comfortable with our 3-year Zero Touch statements; in fact, with over 3 million hours of empirical data collected, the V1000 will exceed our claims. I would like to see all storage systems move toward no-touch maintenance as it will be a great relief to IT directors.