DataSlide has come out of stealth mode with a very creative SSD replacement technology. They call it a Hard Rectangular Disk or HRD.
Here’s their quick overview:
DataSlide applies technology in new, patented ways to achieve unprecedented high performance (160,000 IOPS and 500 MB/sec) and low power (under 4 watts) for a magnetic storage device:
- A piezoelectric actuator keeps the rectangular media in precise motion
- A diamond solid lubricant coating protects the surfaces for years of worry free service
- A massively parallel 2D array of magnetic heads reads from or writes to up to 64 embedded heads at a time
Here’s a diagram, courtesy DataSlide:
But that's not all. According to the redoubtable Chris Mellor at The Register, a
. . . 2-dimensional array of 64 read-write heads, operating in parallel, . . . positioned above a piezo-electric-driven oscillating rectangular recording surface. . . .
The data organization compared to a disk drive looks like this:
courtesy DataSlide
Chris also reports that Oracle’s Embedded Global Business Unit is working with DataSlide to incorporate a database to create a “smart” storage device for use in I/O intensive “multiple concurrent stream” applications.
The company says the drive is at the prototype stage and uses existing high-volume production technologies, including perpendicular recording media, semiconductor lithographic heads and LCD glass treatments.
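For a rough sense of what those headline numbers would mean per head, here is a back-of-the-envelope sketch. The assumption that the aggregate IOPS and bandwidth spread evenly across 64 simultaneously active heads is mine, not DataSlide's:

```python
# Back-of-the-envelope split of DataSlide's quoted aggregate figures
# across 64 simultaneously active heads. The even split is an
# assumption for illustration, not a vendor statement.

HEADS = 64
TOTAL_IOPS = 160_000
TOTAL_MB_PER_S = 500

iops_per_head = TOTAL_IOPS / HEADS          # 2,500 IOPS per head
mb_per_s_per_head = TOTAL_MB_PER_S / HEADS  # ~7.8 MB/s per head
ms_per_io_per_head = 1000 / iops_per_head   # ~0.4 ms per I/O per head

print(f"{iops_per_head:.0f} IOPS/head, {mb_per_s_per_head:.1f} MB/s/head, "
      f"{ms_per_io_per_head:.2f} ms per I/O per head")
```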
The StorageMojo take
DataSlide has taken much from IBM's Millipede concept and reimagined it using common technologies. While much remains to be done to productize the prototype, such architectural creativity should spur new thinking at the hard drive companies.
Of course, just like SSDs, with such low latencies it doesn’t make much sense to stick the device at the end of a long, complex, high-latency interconnect chain. PCI-e HRD card, anyone?
Also, the relatively low capacity – 36GB – of the prototype device suggests it may slot in between larger capacity SSDs and DRAM. Until we know the economics, though, that is almost baseless speculation.
Let’s hope they can get it to market in less than 3 years. And let the based speculation begin!
Courteous comments welcome, of course. This post was updated from the original with the diagrams and some minor edits.
Site is down for me.
Being that this is still in the oven, so to speak, I'm guessing that their advantage over SSD may be size. Aren't we going to be looking at 500MB a sec from a single SSD (albeit a high-end one) in the next few years?
Several years ago I saw a presentation about this (or something essentially similar) at CMU’s Parallel Data Lab. It’s very cool technology, but one thing about it really got me thinking. How do you do data layout and request scheduling on it? We’re all used to thinking about rotational latency and track-to-track time. Outer tracks fast, inner tracks slow, and all that. Now, how do you schedule your I/O when the motion is in X and Y instead, with two separate settle times and no wraparound in either dimension? I never really came up with much of an answer, but the questions gave me quite a few hours of puzzle-solving enjoyment.
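It's fun to play with that question even without real device parameters. Below is a purely illustrative toy model (every constant and the linear seek-time function are invented; nothing comes from DataSlide or the PDL work) of shortest-positioning-time-first selection when the two axes have independent settle times and move concurrently:

```python
# Toy model of 2D positioning: the head assembly moves in X and Y
# independently, each axis has its own settle time, and the two axes
# move concurrently, so total positioning time is the slower of the two.
# All constants here are invented for illustration.

def axis_time(distance, settle_ms=0.1, ms_per_unit=0.002):
    """Time to move `distance` track units along one axis and settle."""
    return 0.0 if distance == 0 else settle_ms + ms_per_unit * distance

def positioning_time(pos, target):
    dx, dy = abs(target[0] - pos[0]), abs(target[1] - pos[1])
    return max(axis_time(dx), axis_time(dy))  # axes move in parallel

def next_request(pos, pending):
    """Greedy shortest-positioning-time-first pick from pending requests."""
    return min(pending, key=lambda req: positioning_time(pos, req))

# From (0, 0), the cheapest of three pending requests:
print(next_request((0, 0), [(120, 5), (10, 200), (30, 40)]))  # -> (30, 40)
```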
Very cool technology. It’s a good sanity check on industry SSD hype. The storage horizon is constantly changing.
Cool tech. I’m not sure about slotting in between SSDs and HD’s, though – 36GB is well below current shipping production SSDs, which are only going to get larger. HRD platters should stack well, though, which bodes well for expandability size-wise. An obvious extension would be to alternate layers of (mobile) r/w heads with layers of (fixed) media, or even allow for both layers to be mobile to allow for longer ‘tracks’.
500MB/s = 4Gbps, which is more than 3Gbps SATA but less than the newly spec'd 6Gbps SATA. Competition with SSDs is a Good Thing, though.
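Spelling out the conversion (and the 8b/10b line coding SATA uses, which is why link rate and payload bandwidth differ by ten bits per byte rather than eight):

```python
# Converting the quoted 500 MB/s into link-rate terms. SATA encodes
# each payload byte as 10 bits on the wire (8b/10b).

payload_mb_per_s = 500
raw_gbps = payload_mb_per_s * 8 / 1000     # 4.0 Gbit/s of payload bits
wire_gbps = payload_mb_per_s * 10 / 1000   # 5.0 Gbit/s on an 8b/10b link

print(f"{raw_gbps} Gbit/s of data, ~{wire_gbps} Gbit/s after encoding")
# A 3 Gbit/s SATA link carries roughly 300 MB/s of payload, so 500 MB/s
# needs the 6 Gbit/s generation, or a wider interconnect such as PCIe.
```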
We did indeed study MEMS-based (aka probe-based) storage in PDL at CMU. The results were wrapped up in a 2004 FAST paper, and a summary and links to older papers looking at scheduling, data layout, device emulation, and other things are here.
As for scheduling, we concluded that while the magnitude of delays were certainly lower than those of disk drives (especially mobile drives), their behavior was very similar to that of disks. Since they are mechanical, positioning delays are distance dependent, favoring local access. Sequential access is preferred over non-sequential access because it is always most efficient to just keep on moving in the direction the media is already headed.
I don’t know any of the details of the Dataslide devices, obviously, so there may be differences from our conclusions. In particular, if the media is kept in resonance, then the device will behave even more like a disk — the heads will position horizontally and wait until the correct offset in the media arrives (“rotates”) into place. Our models assumed that the device has tighter control of both horizontal and vertical positioning. However, I can imagine that keeping the media constantly moving would make life simpler.
@Jeff –
My guess (given that they say "800Hz" and "oscillating") is that the media is constantly vibrating. This provides a fixed addressing schedule, which greatly simplifies the scheduling algorithm. Data is in linear tracks, with, I believe, a single head per track. So there aren't any "settle times" per se – each head is always moving back and forth over its own track.
They can currently have only 64 heads performing IO simultaneously. So they just need a 64-deep IO queue, and the controller never has to worry about activating too many.
I wonder if they can “read” the data backwards, blitting it into cache in the correct order? They must, for 800Hz to be equivalent to “96,000 RPM”.
Basically, all they need to do is keep their queue sorted by linear track offset of the request. Any time the media is nearing the offset of the next item in the queue, prepare that head to service it.
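Here's a minimal sketch of that idea, assuming the reading above is right: one head per linear track, the media sweeping back and forth at a fixed 800Hz, and each head's requests kept sorted by offset so one can be serviced as the sweep passes over it. The track length, window size and all names are hypothetical, not anything DataSlide has described:

```python
import bisect

# Sketch of the scheduling idea described above. The media oscillates at
# a fixed rate, each head owns one linear track, and the controller keeps
# that head's requests sorted by track offset so a request can be
# serviced when the sweep passes over it. This is a guess at the
# behaviour, not DataSlide's firmware; the constants are invented.

OSC_HZ = 800                  # quoted oscillation rate
PASSES_PER_SEC = OSC_HZ * 2   # data passes the head on both strokes
# 2 strokes * 800 Hz * 60 s/min = 96,000 passes per minute ("96,000 RPM")
TRACK_LEN = 4096              # hypothetical addressable offsets per track

def sweep_offset(t):
    """Offset currently under the head at time t (triangle-wave motion)."""
    phase = (t * PASSES_PER_SEC) % 2.0
    frac = phase if phase < 1.0 else 2.0 - phase  # forward, then back
    return int(frac * (TRACK_LEN - 1))

class TrackQueue:
    """Pending request offsets for one head, kept sorted."""
    def __init__(self):
        self.offsets = []

    def add(self, offset):
        bisect.insort(self.offsets, offset)

    def service(self, t, window=8):
        """Pop and return a request the sweep is currently passing, if any."""
        pos = sweep_offset(t)
        i = bisect.bisect_left(self.offsets, pos - window)
        if i < len(self.offsets) and self.offsets[i] <= pos + window:
            return self.offsets.pop(i)
        return None
```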
Hmm, I wonder if all bit regions are the same physical size? Or do they "stretch" in the middle, where media velocity is higher? Certainly that'd be the simplest prototype approach, as the clock-time per IO would be the same everywhere (this is of course like CAV rotational media, e.g. HDDs).
The press release and PowerPoint slide set are a bit misleading on this one – it's worth going to the comments on The Register article, where Charles F J Barnes has added quite a lot more. Rather than having 16 fixed heads, it actually has millions held in fixed registration, and the amount of movement in each direction is very small (about 100 microns).
Latency is a lot better than rotating disk, but not up to the best SSD standards. The oscillations are at about 800Hz, which would give average (uncontended) random access times of about 0.6ms – more than an order of magnitude worse than the best SSDs (but an order of magnitude better than rotating disks). It is impossible to get close to SSD latency figures on traditional shared storage protocols, so for the moment you do need direct bus attach to get the best out of these, but FC SANs are able to get latencies below 0.4ms, so these are a better fit (we see write-to-array cache times of about 0.4ms measured from the device driver level).
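Working through that arithmetic (the half-period average wait is my reading of where the ~0.6ms comes from; a 15K drive's rotational latency is included for comparison):

```python
# Latency arithmetic implied by the 800 Hz figure. Assumes a random
# request waits, on average, half an oscillation period for its offset
# to come back around, and ignores electronics overhead.

osc_hz = 800
period_ms = 1000 / osc_hz                # 1.25 ms per full oscillation
avg_wait_ms = period_ms / 2              # ~0.6 ms average access

# For comparison, a 15K RPM drive's rotational latency alone:
rev_ms = 60_000 / 15_000                 # 4 ms per revolution
avg_rotational_ms = rev_ms / 2           # 2 ms, before any seek time

print(f"HRD ~{avg_wait_ms:.2f} ms vs 15K disk ~{avg_rotational_ms:.0f} ms rotational")
```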
With the need to fabricate millions of heads over quite a large surface area (I think perhaps 100+ sq cm) at sub-micron resolution, this is a considerable fabrication challenge. Of course the device is incredibly intolerant of differential expansion: if there were mis-registrations of anything like one micron over the width of the device it would fail to work, but apparently the type of glass to be used has an expansion coefficient of less than one part in 10^9 per degree.
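That expansion figure is easy to sanity-check; the 10 cm width and 10°C swing below are my own illustrative choices, not the commenter's:

```python
# Sanity check on the differential-expansion budget. Substrate width
# and temperature swing are assumed values for illustration.

alpha_per_c = 1e-9     # claimed expansion coefficient, per degree C
width_m = 0.10         # ~10 cm substrate width (assumed)
delta_t_c = 10         # degrees C of temperature swing (assumed)

growth_m = alpha_per_c * width_m * delta_t_c
print(f"{growth_m * 1e9:.0f} nm of growth")  # ~1 nm, far below the 1 micron budget
```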
Producing a device like this cheaply (and making it reliable) is going to be a huge challenge, and even if they can get the cost down to that of a 15K FC drive (their aim), there will be a point where the costs intersect with flash. We'll see – an interesting approach, but the test will be when there is real product available. Certainly anything that can deal with the stuck-in-a-rut latency of enterprise drives at an acceptable cost is to be welcomed.
Considering DataSlide are using available tech to build this thing, it is obvious this will cut costs. SSD is great for speed, but the lifetime is somewhat exaggerated and the shrinkage over time isn't appealing to me. Even if this is slightly slower than SSD, it will be faster than a standard disk, so it's still one more step forward, even if it's smaller than the average drive in capacity. Surely an average user with two generic SATA HDDs and a DataSlide holding the OS would be well served.
I remember installing and fixing similar technology from Storage Technology Corp (STC) back in early 1980, when they had spinning disks with fixed heads to avoid seeks (the STC 8350 fixed-head disk). Many heads sat on a board glued at the base, and the last platter, above those heads, had special cylinders equally spaced, one for each head. No head motion, unlike the sliding solution here.
Unfortunately it was a bad disk after all. Disk and head alignment was hard to do, and we had lots of failures at the time. Of course technology has changed and this could be solved now.
Now instead of a spinning disk they use a sliding one, and more heads. Wow!