I’m in Silicon Valley for a few days. So I’ll keep this brief.
EMC is pulling out all the stops. First Hulk/Maui clusters, and now flash SSDs in the Symm. They are positioning it as technology leadership, which it isn’t, but it is marketing leadership. I’m impressed.
SSDs have been around for decades. Symms have been around for over 15 years, so why now?
I suspect the rising chorus of customers complaining about 30% capacity utilization rates coupled with Wall Street’s economic woes – I wouldn’t want to be EMC’s Citibank account manager – helped them make the decision. Plus the rise of cluster block storage – XIV the latest case in point – means that if you want to own the high-performance array crown it is time to stake out the territory.
Plus the margins are great!
I haven’t seen any pricing yet, but knowing EMC’s general strategy I suspect they are charging their usual 6x markup over cost for the SSDs. Despite that, it should be an easy business case for a CFO to approve.
But if you are going to spend big bucks on an SSD, is putting it inside a single storage array the right way to go? The wide-awake folks at Texas Memory Systems think not. They provided me with this table comparing their SSD to the STEC ZeusIOPS drive EMC is using.
[Table: TMS RamSan-500 vs. STEC ZeusIOPS, comparing sustained random read IOPS, sustained random write IOPS, sustained sequential reads, and sustained sequential writes; the figures were not preserved.]
Make an entire SAN go faster instead of a single array? Sounds good to me.
The StorageMojo take
Will SSDs finally get some data center love? EMC’s endorsement of SSDs should provide an opening for the long-suffering SSD companies to get more attention from the enterprise. If it’s good enough for EMC . . . .
Comments welcome, as always. Moderation may be slightly more intermittent than usual, but moderate I shall. When I’m not enjoying the convertible I rented.
Are those numbers per disk or for an entire 9-drive RamSan-500? That isn’t a disk; it’s a unit you put in a rack, with RAID-5 across the disks and all that.
So I need two Zeus drives in a DMX to match the random IOPS performance of the RamSan. Even given the initial high cost of the Zeus, I think I could make a safe bet as to which is cheaper. On the throughput issue, I don’t believe it’s going to be important to customers who will be using it primarily for random I/O, where actual throughput rates are tiny.
We’re getting the Mtron drives next week. The benchmarks should be off the hook.
I’ll join the confusion: is it a single drive vs. an 8-port Fibre Channel array with 9 drives? It doesn’t matter; the RamSan is running into an internal I/O bottleneck, probably the PCI-X internal bus or the drives. If you had 9 Zeus drives, those numbers suggest an aggregate bandwidth equal to the RamSan’s. It sounds like a wash on performance, but I thought the Zeus only writes at 100 MB/s.
Enjoy the wheels
The primary criticism of solid state I’ve heard is that it requires strong “plumbing” for a server to actually take advantage of its speed. In the disk world, one of our major problems has always been providing low-latency small random reads for database-type applications. Write cache has been our only answer for years, and I’m surprised that it’s taken over a decade for someone to put an SSD into a disk array.
Stephen Foskett: good catch. EMC claiming LESS than it could? After all, 300 IOPS on an enterprise 15k drive – which I think is a bit high, but OK – x 30 = 9,000 IOPS, about half of what STEC claims.
Maybe there are some issues with the drive in situ.
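Stephen’s arithmetic is easy to check. Here is a quick sketch; the 50K/19K read/write figures are STEC’s own claims as quoted elsewhere in this thread, not measurements:

```python
# Sanity check on EMC's "30x a hard drive" claim vs. STEC's spec-sheet numbers.
HDD_IOPS = 300            # generous figure for a 15k RPM enterprise drive
EMC_MULTIPLIER = 30       # EMC's marketing claim: 30x a hard drive
STEC_READ_IOPS = 50_000   # STEC's claimed sustained 4K random reads
STEC_WRITE_IOPS = 19_000  # STEC's claimed sustained 4K random writes

emc_implied = HDD_IOPS * EMC_MULTIPLIER
print(emc_implied)                    # 9000: roughly half of STEC's write claim
print(STEC_READ_IOPS / HDD_IOPS)      # ~167x: what the raw read spec implies
```

So EMC’s own multiplier concedes less than a fifth of what the drive spec suggests, which is exactly the discrepancy the commenters are circling.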
I’m very interested in the performance claims here, since EMC has traditionally been quite reluctant to make any performance claims about their products. But I guess when you introduce a super-expensive technology just for a performance boost, you kind of have to talk performance numbers! 🙂
I bet the STEC numbers are “best case” and the EMC numbers are “safe” – this would reflect EMC’s marketing position and explain the discrepancy. I’d love more information on the TMS configuration…
Some of you may remember my post about these very SSD’s back in September last year. http://www.ibm.com/developerworks/blogs/page/storagevirtualization?entry=ssd_s_are_becoming_a
The 30x claim by EMC made me smile too. I have proved through physical benchmarking of the STEC drives that they can sustain the claimed 50K and 19K figures when using 4K transfers. As you increase the transfer size, the IO/s decrease accordingly. BTW, the only controller I could use to actually drive these was SVC.
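The falloff with transfer size follows from a fixed bandwidth ceiling: throughput is IOPS times transfer size, so once the link or drive bandwidth saturates, bigger transfers necessarily mean fewer of them. A rough sketch with assumed numbers (the ~200 MB/s ceiling is roughly what a 2Gb FC link delivers; the peak IOPS is the claimed 4K read figure):

```python
# Illustrative only: how a fixed bandwidth ceiling caps IOPS as transfer size grows.
BANDWIDTH_LIMIT_MBS = 200  # assumed ceiling, ~what 2Gb FC delivers in payload
PEAK_IOPS = 50_000         # claimed sustained random read IOPS at 4K transfers

for kb in (4, 8, 16, 32, 64, 128):
    # IOPS is capped by whichever limit binds: the drive's peak command rate
    # or the bandwidth ceiling divided by the transfer size.
    iops = min(PEAK_IOPS, BANDWIDTH_LIMIT_MBS * 1024 // kb)
    print(f"{kb:>4} KB transfers: {iops:>6} IOPS, {iops * kb / 1024:.0f} MB/s")
```

At 4K the command-rate limit binds; from 8K upward the bandwidth limit takes over and IOPS halve with each doubling of transfer size, which matches the behavior described above.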
As for the 30x statement, I read it to mean that’s all the DMX can actually get from them. This is more a statement about the FCAL capabilities at the back end of the DMX than anything else. I guess it’s also maybe a statement about their pricing being 30x. I know the price STEC were talking when they talked with us, and also that EMC signed an exclusive deal with them for the FC variants of these drives.
These drives are 2Gig FCAL attach, so their 4Gig back end will be slowed down when you put these in; not really a problem for IO/s, but it will limit the MB/s claims.
What I find most interesting, and probably again the reason for the 30x (rather than the 150+x the drives are capable of), is that they are supporting up to 32 per DMX quadrant. Now, we know that EMC don’t publicly benchmark their controllers; IBM does, and so we have an idea of what a DS8300 can sustain. The DMX is likely to be in a similar ballpark, based on the number of drives they support, etc. So don’t expect the drive performance from the array; it’s going to be limited by the physical subsystem and not the drives. This is a capacity play, and fair game. It offers reduced latency and a much increased IO/s range, even if you can’t actually get the max out of the drive.
The drives themselves meet the claims made by STEC. They are sustainable in the right enclosure 😉
We would be happy to give you a full briefing on the RamSan-500 enterprise flash solid state disk and its performance characteristics. As you can see above, we provided some data comparing the RamSan to the STEC drive, but you can only take so much from that in terms of comparing us to the EMC system. EMC has not published any real performance data, so it is hard to judge how well they are taking advantage of the STEC claimed performance. If you have looked at the EMC controller-cache-storage architecture, you can see that it will add latency to STEC drive accesses. You can also see that the EMC back-end loops have an interesting effect on the actual deployment of the drives. So, all told, we are left where we usually are with EMC: they do not publish real performance data, and we will have to determine their actual performance from the tests that real-world customers do comparing our products.
Woody Hutsell, EVP, Texas Memory Systems, 713-266-3200 x.241
Mr. Hutsell, I would like to know more about the RamSan-500 specs and SRP prices too. If you have a competitive list with performance, that would be great. Please email any information to email@example.com or call me at 617-838-4040.
SSD in an array is not new. Xiotech has been shipping this feature since mid-2006. It slides in next to the Fibre Channel disks. Great for transaction log activity.