Another scale-out storage vendor bought

by Robin Harris on Tuesday, 11 May, 2010

Harmonic is acquiring video production infrastructure and storage provider Omneon for $274 million. Omneon had raised about $100M since its founding.

Omneon Video Networks is a specialized storage company that provides broadcast-quality storage for digital media, along with the gear needed to convert video streams to bits. Their MediaGrid product does clustering, with a sophisticated architecture that can handle a 24×7 beating.

Founded in 1998, venture-backed Omneon started offering storage in response to customer demand. They chose a commodity-based cluster and built their own storage software, MediaGrid, whose architecture hews to the post-array Google-style storage model:

  • No RAID – slices are replicated one or more times based on policy or demand
  • Single global namespace
  • Out-of-band meta-data servers manage content servers
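
That model is simple enough to sketch. The toy snippet below illustrates policy-based slice replication with an out-of-band metadata server; all class, method, and server names are hypothetical, not Omneon's actual API.

```python
import random

class MetadataServer:
    """Toy out-of-band metadata server: it only tracks which content
    servers hold replicas of each slice; clients then read the data
    directly from those servers."""

    def __init__(self, content_servers, default_replicas=2):
        self.content_servers = list(content_servers)
        self.default_replicas = default_replicas
        self.slice_map = {}  # slice_id -> list of content server names

    def place_slice(self, slice_id, replicas=None):
        # No RAID: each slice is simply copied N times, where N comes
        # from a per-slice policy (hot content can ask for more copies).
        n = replicas if replicas is not None else self.default_replicas
        targets = random.sample(self.content_servers, n)
        self.slice_map[slice_id] = targets
        return targets

    def locate(self, slice_id):
        # Out-of-band lookup: ask the metadata server where the slice
        # lives, then fetch it straight from a content server.
        return self.slice_map[slice_id]

meta = MetadataServer(["cs1", "cs2", "cs3", "cs4"])
meta.place_slice("clip42/slice0")               # default policy: 2 copies
meta.place_slice("clip42/slice1", replicas=3)   # hot content: 3 copies
```

Losing a disk then means re-replicating the affected slices from their surviving copies, rather than rebuilding an entire RAID group.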

Omneon’s content servers do more than serve content. They put their unused CPU power to work on jobs like transcoding – translating content from one format to another, such as HD to iPhone-suitable QuickTime.
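
The cycle-scavenging idea fits in a few lines. This is a hypothetical illustration, not Omneon's scheduler: a content server drains a queue of transcode jobs only while it isn't busy serving reads.

```python
import queue

def run_idle_jobs(jobs, cpu_busy):
    """Drain transcode jobs while the CPU is otherwise idle.

    jobs: a queue.Queue of (clip, target_format) tuples.
    cpu_busy: a callable returning True when serving content
              should take priority over background work.
    """
    done = []
    while not jobs.empty():
        if cpu_busy():
            break  # serving content always wins; resume later
        clip, fmt = jobs.get()
        done.append(f"{clip} -> {fmt}")  # stand-in for a real transcode
    return done

q = queue.Queue()
q.put(("game7.mov", "iphone_quicktime"))
q.put(("game7.mov", "h264_sd"))
# With a never-busy CPU, both jobs complete in order.
completed = run_idle_jobs(q, cpu_busy=lambda: False)
```

The same clip gets transcoded to each delivery format independently, which is what makes the work embarrassingly parallel across content servers.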

The StorageMojo take
Omneon is more than a storage company, but their storage made them a competitor to Isilon in the broadcast market. Harmonic is big in the rest of the video workflow, especially distribution in multiple formats. It looks like the two firms complement each other nicely.

Omneon was not a pure play storage company. But the fact that they were able to build a competitive storage product as an adjunct to their main business points up how low the barriers to entry are in scale-out storage.

Courteous comments welcome, of course. I’m still at EMC World. YottaYotta’s technology is front and center in the VPLEX product. More on that later.

4 comments

John May 13, 2010 at 4:02 am

Or is it more a question of what to do with all the cores, as posited above, and nothing else?

So: if you have a vertical, especially one such as transcoding, which is ringfenced by lots of IP & licensing, where cost and non-transferability of licenses beget inertia, it makes sense, and is even functionally imperative, to have cpu-task-local / DAS storage. I imagine here one file copy gets transcoded to format x, another to format y. Elegant, simple parallelism. Necessary when you’re post for a major show to be delivered in umpteen formats, let alone the soon to be “flash does not fit all” web.

This is certainly the way forward for non-archive media stores. The leverage in broadcast, where engineering skills are high but profitably aimed elsewhere, is obvious because the sales channel is historically deep, and intertwined with a plethora of gloriously high-margin gear*. I’m thinking of how long Sony took to finally let its storage division fade.

So, my question is, how well does this idiom scale out to other markets?

I mean, excusing explicitly parallel research / tree-style DB query / Monte-Carlo type jobs, what’s left? Is that enough to see every remaining independent storage vendor be picked up by a vertical or an HP?

I see the problem, in general terms of core multiplication, as being CPU interconnect bandwidth. It’s cheaper to buy (build) a single-socket 12-core Opteron because extra lanes mean cheaper DIMMs, regardless of compute need. Want a lot of simple disks attached? How many PCIe lanes can be dedicated? They also scale with the cores.

I now get from the latest AMD chips, oooh, about ES40 levels of system bandwidth . . . (OK, sorry, too much hand-waving sarcasm)

Is it all-too-early fear of (heaven forfend) application vendors who (avast!) know what their customers need to do (and so have slightly fewer problems at the data semantics level), which drove notionally suicidal forays into lock-in software management licensing and black-box closed-tin? I’m looking backwards, but without hindsight, as I’m quite uncertain as to the overall direction, and even the silliest of storage-related licenses is mitigated by fork-lift upgrade cycles which might only be stymied by error rates.

I’m wondering, in hope of cauterizing my last paragraph, whether someone couldn’t parlay rules such as retail banking’s “Know Thy Customer” into “Know Thine Data”, and reclaim for actual users the logic required to manage what they purport to depend upon.

And will we be able to chart a correlation between core accretion and verticalization of storage at the application level?

And what, if anything, is being done at the FS level? Does anyone remember ODS2? Or note that NTFS neatly does ODS2 things such as file[1].file, file[2].file, just not the diffs (unless you count reparse points)?

There seems to be an existential crisis as to where to put the logic, and that suits very well the merry-go-round of selling intermediating hardware and calling out “yer FS is over THERE! No, really, over there [points vigorously] . . . over the LAN/WAN/MPI/Non-OSI Layer ‘x’ . . no . . no . . I’m sorry, you can’t have an API with that . . . it’s the CLOUD, see??”

Please forgive me my first comment here, and now speaking directly to Robin: sorry for lurking and THANK YOU FOR THE SANITY! I first came across this site 3-ish years ago seeking to allay the anguish of rebuilding a hosed production 5-array where Adaptec firmware had wiped all too much metadata for comfort. I feel I can say that now they’ve left the building . . . a storage company which doesn’t / didn’t document file table loss in the event a power cycle intervened on a drive disconnect, ya serious, guys???!!! Now I can add a “ :-)” and also comment that there really is such a hole in one’s heart as knowing a little too much**.

kind regards & thanks to all, from this fool who built TB arrays a decade ago, for fun and kicks (and profit, someone claimed), and who – for the record – is *not* an IT line function in my business!

all best from me,

– john

*Think not of what the “market size” is, as a naive VC pitch might put it (say, “video = youtube + weddings + . .”), but instead of the much smaller market defined by “so who pays top dollar for the very best production because they either care or are at their professional height?”

** at “rust-level”

John May 13, 2010 at 4:37 am

Epi-commenting, unfortunately:

Regarding running encoding transforms on replicated data, in the above scenario, I’d hate to have to decide where locks and integrity checks get administered, assuming those same copies act as part of a kind of raid-z. This also assumes that, per storage node, there is a single copy per hardware POF, and then idle cycles work on that single copy. That’s a lot of trust in your application code. Once again, what’s the FS being used locally, and can it do anything to help? Is there actually an FS, or just a bunch of metadata pointing out to locations, or is it deeper? I’d want to hear that in the first presentation . .

Also, apologies for forgetting ODS2 file version numbering. Sometimes you just (try to) forget how something happens and get on with real work. Like trying to avoid HP-proprietary (or n-proprietary) storage for VMS installs by solving non-linear “Capital M Mission” vs. “redundancy solves it for so many sigmas” equations 🙂

all best,

– j

Duke May 16, 2010 at 12:37 pm

It would be interesting to know your take on Harmonic.
Do you think this was a smart move on their part?

ST3F October 29, 2011 at 3:55 am

What is the file system used by Omneon?

