A video networking company in StorageMojo?
Omneon isn’t new to StorageMojo. Their price list has been on the price list page since January 2007.
Their booth was about 50 yards from Isilon’s and EMC’s, and it was a madhouse each time I walked by. Partly that was because they were holding all their meetings there, but it also seemed like there was lots of traffic.
Building storage into an app
Founded in 1998, Omneon started offering storage in response to customer demand. They decided on a commodity-based cluster and built their own storage software, MediaGrid.
Their architecture hews to the post-array Google-style storage model:
- No RAID – slices are replicated one or more times based on policy or demand
- Single global namespace
- Out-of-band meta-data servers manage content servers
~~They can rebuild a failed 1 TB drive in less than an hour.~~ They can replicate the data from a failed 1 TB drive in less than an hour. Just add 4- or 24-drive content servers to scale capacity. **Update:** My original wording was incorrect. Thanks to Bill Todd for elucidating Omneon’s mechanism. **End update.**
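To make the slice model above concrete, here is a minimal sketch of policy-based replication and parallel re-replication. It is an illustration only: the names, the 64 MB slice size, and the random placement policy are my assumptions, not Omneon’s actual MediaGrid design.

```python
# Hypothetical sketch of slice placement and re-replication across content
# servers. Names and numbers are assumptions, not Omneon's implementation.
import random

SLICE_SIZE = 64 * 2**20  # assume 64 MB slices; the real slice size isn't given


def place_slices(file_size, servers, copies=2):
    """Assign each slice of a file to `copies` distinct content servers."""
    n_slices = -(-file_size // SLICE_SIZE)  # ceiling division
    return {s: random.sample(servers, copies) for s in range(n_slices)}


def re_replicate(placement, failed, servers):
    """After a server fails, restore redundancy for every slice it held.

    Each lost slice's surviving replica sits on a different server, so the
    copies can be read and written in parallel across the whole cluster --
    which is why the lost data comes back in well under an hour.
    """
    survivors = [s for s in servers if s != failed]
    fixed = {}
    for slice_id, homes in placement.items():
        if failed not in homes:
            fixed[slice_id] = homes
            continue
        keep = [h for h in homes if h != failed]
        target = random.choice([s for s in survivors if s not in keep])
        fixed[slice_id] = keep + [target]
    return fixed


servers = [f"cs{i:02d}" for i in range(24)]                      # a 24-server grid
placement = place_slices(file_size=50 * 2**30, servers=servers)  # a 50 GB asset
placement = re_replicate(placement, failed="cs07", servers=servers)
```

The point of the scattering shows up in re_replicate: every surviving server holds a share of the lost slices, so restoring redundancy is a cluster-wide parallel job rather than a single-disk copy.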
But that’s not all!
Omneon’s content servers do more than serve content. They put their unused CPU power to work on jobs like transcoding – translating content from one format to another, say from HD to iPhone-suitable QuickTime.
Given the growth in multi-core processors, that capability will become a more important part of their market appeal over time. Since they process files, not blocks, they have many more opportunities to add value than a modular array.
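As a hedged sketch of how those spare cycles might be harnessed: pick the most idle content server and hand it a transcode job. The scheduling policy and the ffmpeg command line below are my assumptions for illustration; Omneon’s actual transcoding pipeline isn’t described here.

```python
# Hypothetical spare-cycle transcoding dispatch; not Omneon's mechanism.
import subprocess
from dataclasses import dataclass


@dataclass
class ContentServer:
    name: str
    cpu_idle: float  # fraction of CPU currently idle, 0.0 - 1.0


def pick_transcoder(servers, min_idle=0.5):
    """Choose the content server with the most idle CPU, if any qualifies."""
    idle = [s for s in servers if s.cpu_idle >= min_idle]
    return max(idle, key=lambda s: s.cpu_idle) if idle else None


def transcode_for_iphone(src_path, dst_path):
    """Down-convert an HD source to an iPhone-friendly H.264/AAC file."""
    subprocess.run(
        ["ffmpeg", "-i", src_path,
         "-vf", "scale=640:-2",           # shrink to a phone-sized frame
         "-c:v", "libx264", "-c:a", "aac",
         dst_path],
        check=True,
    )


grid = [ContentServer("cs01", 0.2), ContentServer("cs02", 0.8)]
node = pick_transcoder(grid)
if node is not None:
    # in a real grid this call would run *on* the chosen server
    transcode_for_iphone("show_hd.mov", "show_iphone.mov")
```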
The StorageMojo take
Omneon made a lot of smart choices with their MediaGrid architecture. It shows how a company with a few bright engineers can build a basic storage utility to take advantage of low commodity costs.
Where they win is their integration with the application and the workflow. They’ve created a video utility that integrates ingest, post, media management and playout with the smart and scalable storage needed to make it all work.
Application-specific storage writ large. They’ve taken the same storage the rest of us use and wrapped it in broadcast interfaces that broadcasters already know.
Comments welcome, of course.
Robin,
Rebuilding one TB in an hour requires almost 300 MB per sec of average write speed to the disk surface… not possible with a single disk.
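Richard’s figure checks out; for the record:

\[
\frac{1\ \text{TB}}{1\ \text{hour}} = \frac{10^{12}\ \text{bytes}}{3600\ \text{s}} \approx 278\ \text{MB/s},
\]

several times the sustained write rate of any single drive.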
Richard –
The way it’s normally done is by distributing the redundant data of each disk across a *lot* of its fellows (rather than to just one other disk as with conventional mirroring – e.g., its mirrored data would instead go a bit to each of the other disks). Then if a disk fails, virtually *all* the other disks can participate in rebuilding the lost mirrored data, and it can be rebuilt across *all* the remaining disks (i.e., all the combined bandwidth of the remaining disks can be devoted to both reading and writing).
Thus you can rebuild the *data* from a failed 1 TB drive at almost arbitrary speed, because you’re not limited by a single drive’s bandwidth.
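To put rough numbers on that (example figures, not Omneon’s): if the failed drive’s capacity \(C\) is scattered across \(N\) surviving drives, each sustaining bandwidth \(B\) split between reading surviving replicas and writing new ones, then

\[
t_{\text{rebuild}} \approx \frac{C}{N \cdot B / 2}
\quad\Longrightarrow\quad
\frac{10^{12}\ \text{bytes}}{100 \times 60\ \text{MB/s} \,/\, 2} \approx 330\ \text{s},
\]

a handful of minutes rather than the hours a drive-to-drive copy would take.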
The down-side of doing this is that RAID-1-style multiple-failure probability becomes RAID-5-style multiple failure probability: once one disk fails, the failure of *any one* of the remaining disks will (likely) result in data loss. This is to some degree offset by the fact that the mean time to repair (i.e., restore the previous level of redundancy) may decrease drastically, potentially nearly completely offsetting the increased probability of data loss – but that does nothing to protect against multiple *simultaneous* failures (e.g., as might occur after a severe power glitch).
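A back-of-the-envelope version of that offset, under the usual simplifying assumptions (independent failures, constant failure rate): after one failure, the window of exposure is the rebuild time, and any of the \(N-1\) survivors can cause loss, so

\[
P_{\text{loss}} \;\propto\; (N-1)\cdot\frac{t_{\text{rebuild}}}{\text{MTTF}} .
\]

Because \(t_{\text{rebuild}}\) shrinks roughly in proportion to \(N\), the product stays within a small constant factor of a conventional mirror’s exposure – which is why the faster repair nearly offsets the wider blast radius, while doing nothing for truly simultaneous failures.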
On balance, the additional benefits of such scattering tend to make it attractive (e.g., it allows the use of distributed free space for ‘hot sparing’, thus allowing all drives to contribute to the system’s performance rather than leaving some standing by idle to take over if needed). But it’s still a somewhat complex trade-off.
– bill