Winding up the week – it is Friday here – in Japan as a guest of Hitachi Data Systems. Fine hospitality from my American and Japanese hosts in steamy mid-summer Tokyo. Looking forward to Arizona.
The practitioners in the group – one who loves XIV, others with EMC and NetApp kit – were surprised by what the HDS stuff does. Such as virtualizing and managing your current storage platforms, regardless of vendor.
Seems like the big guys have been promising that for years. HDS delivered? Whoa.
A few things impressed me:
- The senior Japanese execs weren’t the starchy, face-saving guys I’d expected. The Chairman of Hitachi made a speech to about 10,000 people without a tie, and all the other execs I spoke to followed suit. Even when giving careful non-answers, they came across as relaxed and realistic. Are they also decisive? We’ll see.
- HDS has a clustered object store. I hope to get briefed on it next month.
- The parent company has a vision for using massive amounts of data to improve our quality of life. Since they also produce power systems and high-speed trains they have a direct line into some critical issues.
The StorageMojo take
HDS is a multi-billion dollar company with some leading edge products and technologies. They’re about the size of NetApp – and I know you’ve heard of them.
As their OEM relationship with Sun winds down – or at least I expect it to – they’ll have more direct contact with a new group of customers. Now is the time for HDS to sharpen their messaging and turn up the volume.
Sadly that isn’t likely. The internal dynamics of the company seem to lead to generic messaging that fails to plant a hook. Maybe it is a consensus thing. But they aren’t doing customers any favors.
Courteous comments welcome, of course. Any recent experience with HDS?
Regarding the HDS clustered object store, isn’t that just HCAP (née Archivas)? Or is this something entirely different that you can’t talk about?
Hitachi is run by some very smart people and I have come across a few of them in the past. The only problem with Hitachi is… well, they’re too smart for their own good. When you have a storage system so well engineered that only a storage engineer can sell it to another storage engineer, you’ll always lag behind NetApp and EMC. I think Hitachi needs to implement some good old western marketing strategies to really communicate the value of their solutions. It’s similar to deciding whether to buy a Ferrari or a Formula One car…
Nate, yes it is the Archivas product, but I *think* they’ve tweaked it. I hope to learn more next month. Stay tuned.
Robin
We just bought their AMS2500 this year after using 9200s, 9585s, and a USP100. These tanks ROCK. They’re fast, they’re reliable, and they just work.
My only complaint, tech-wise, is with the GUIs being slow as molasses. They work great, they’re just so slow.
–Jason
Robin,
Archivas was focused on archiving. Do you expect the new solution to sustain performance for primary storage as well, or is it still aimed at long-term data repositories? Is any smart search technology included in the product?
Robin, you’re correct. HCP is the distributed object store that began life at Archivas. When I came to HDS in January of 2009, one of my greatest concerns was to discover how general the design of the product was. If it was designed to only suit the application at the time, namely archiving, then this was going to be disappointing because it’s difficult to tarpaper over bad design with good add-ons. Fortunately, the design was indeed sufficiently generic; i.e. archiving was simply an application of the technology, not the technology itself. This allowed us to broaden the scope of the system to essentially have a single system (and, more importantly from our vantage, a single codebase) that could work equally well in traditional enterprise environments and in a subscriber model (which people now refer to as cloud).
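To make the “application of the technology, not the technology itself” point concrete, here is a minimal sketch of that separation – a generic object store with archiving layered on as just one policy. The names and interfaces are hypothetical illustrations, not HCP’s actual design:

```python
import hashlib
import time

class ObjectStore:
    """Minimal generic object store: content-addressed blobs plus metadata.
    The core knows nothing about archiving; it only stores and retrieves."""

    def __init__(self):
        self._objects = {}  # object_id -> (data, metadata)

    def put(self, data, metadata=None):
        object_id = hashlib.sha256(data).hexdigest()
        self._objects[object_id] = (data, dict(metadata or {}))
        return object_id

    def get(self, object_id):
        return self._objects[object_id][0]

    def metadata(self, object_id):
        return self._objects[object_id][1]


class ArchivePolicy:
    """Archiving expressed as an *application* of the store: it only adds
    retention metadata on ingest and a delete gate on top."""

    def __init__(self, store, retention_days):
        self.store = store
        self.retention_seconds = retention_days * 86400

    def ingest(self, data):
        return self.store.put(
            data, {"retain_until": time.time() + self.retention_seconds}
        )

    def can_delete(self, object_id):
        return time.time() >= self.store.metadata(object_id)["retain_until"]


# The same unmodified core could back a multi-tenant "cloud" front end instead;
# nothing in ObjectStore itself would need to change.
store = ObjectStore()
archive = ArchivePolicy(store, retention_days=2555)  # e.g. ~7-year retention
oid = archive.ingest(b"quarterly report")
print(oid, archive.can_delete(oid))  # can_delete stays False until retention expires
```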
I’ve recently finished a paper that talks about object stores in general (as I’ve worked on 3 of these now: Centera, Atmos and now HCP) and brings some particulars of HCP in as example. That paper is here: http://www.robertprimmer.com/home/Tech_Papers_files/dos-pop-rjp-v5.pdf
I hope to have time to follow this up with a paper that’s more particular to HCP and gets into what changes we’ve made in v3 and how that’s intended to evolve through the next 2 releases.
In the interim, you can just email me and we can talk over some of the detail.
Bob
Hi Robin,
just curious – why were practitioners surprised by Hitachi’s virtualization capabilities? It’s something that has been well known to everyone in the storage industry for years.
Damir, that’s the magic of stealth marketing: people don’t know.
Robin
Any new whitepapers out of this meeting you can clue us in on?
Robin, as Bob states the Hitachi Content Platform is about extending the core technology into other markets and to support other use cases. I’ve written quite a bit about HCP in the past on the blog, and one of them relates to it being our wolf in sheep’s clothing. We recognized that the foundation was sound and could be easily shaped to solve interesting problems like say cloud storage… Thanks for coming to Japan and I hope that Arizona’s dry heat is treating you well.
Chris Mellor at The Register has a similar take on HDS:
http://www.theregister.co.uk/2010/07/26/hds_enigma/
though he attributes most of the problems to a Japan vs the West culture clash.
RE: “I am always torn between making the ‘boxen’ really smart or keeping them really dumb. The ‘big-brained’ people feel the best paradigm is ‘dumb boxen’ and very intelligent edges. That seems to be where we are headed.”
HDS Storage is very smart boxen that appear dumb by design.
Their software contains a wealth of thought. Almost labyrinthine in its complexity. It is not “User Friendly” unless you have the same mindset as the guys who wrote it. If you could just extract it and make a few changes – Oh! Joy!
RE: “commodity hardware – servers, unmanaged switches, SATA drives – will be knit by cluster software that may even be open source. It is ‘enterprise’ because an enterprise is using it.
This is why all the big iron vendors are migrating their software from embedded firmware to stacks running on commodity processors and operating systems. For the mainstream market the commodities are fast enough and the economics are compelling.”
In my “Best of All Possible Worlds” I would have HDS boxen with the ability to extend their software to manage commodity boxen – a Storage Controller Appliance. Since HDS will never do this, the only hope is that some bright software company will reverse-engineer the HDS software so it will run on Open Systems. And be more “User Friendly”…
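To make the “Storage Controller Appliance” idea concrete, here is a minimal sketch of a thin virtualization layer that presents one address space while dispatching I/O to whatever commodity backends sit behind it. All names are hypothetical; this is an illustration of the concept, not anything HDS ships:

```python
class Backend:
    """Any commodity box that can store a block keyed by logical block address."""

    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def write(self, lba, data):
        self.blocks[lba] = data

    def read(self, lba):
        return self.blocks[lba]


class VirtualController:
    """Presents one virtual address space and dispatches I/O to the backends
    behind it -- the controller holds the smarts, the boxen stay dumb."""

    def __init__(self, backends):
        self.backends = backends

    def _place(self, lba):
        # Trivial placement policy; a real controller would map extents,
        # track tiers, and migrate data between backends.
        return self.backends[lba % len(self.backends)]

    def write(self, lba, data):
        self._place(lba).write(lba, data)

    def read(self, lba):
        return self._place(lba).read(lba)


ctrl = VirtualController([Backend("hds_array"), Backend("jbod_1"), Backend("jbod_2")])
ctrl.write(42, b"some block")
print(ctrl.read(42).decode())
```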
I have been thinking that it is time for a new way of organizing Information and its Storage Technology.
The Google “knol” concept is very interesting as a way to do this.
In “Primary Storage” a “knol” might be implemented by an HDS box connected to many commodity boxen, or by no HDS at all – just commodity boxen. The HDS software feature/function set would have to live somewhere.
If you are a “Technology Centric View” person then tiering looks really good. If you are an “Information Centric View” person the “knol” looks good. With virtualized Storage, stored Information can be grouped quickly in ad hoc space “knols” and dispersed just as quickly. A Managed Unit of Information could easily be included in as many “knols” as need it.
Hence the need for high replication and internal network bandwidth. It might take 10 terabytes an hour or more. Instantaneous would be ideal.
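As a rough sketch of how the “knol” bookkeeping might look (hypothetical names, one reading of the idea, not an actual product): a Managed Unit of Information is just an ID, and a knol is a named set of those IDs, so membership is many-to-many and grouping or dispersing a knol is pure metadata. Only replicating the underlying data costs bandwidth – 10 terabytes an hour works out to roughly 2.8 GB/s, or about 22 Gbit/s sustained.

```python
from collections import defaultdict

class KnolIndex:
    """Many-to-many mapping between managed units of information and 'knols'.
    Grouping and dispersing touch only this index, never the stored data."""

    def __init__(self):
        self.knols = defaultdict(set)        # knol name -> unit IDs
        self.memberships = defaultdict(set)  # unit ID   -> knol names

    def add(self, knol, unit_id):
        self.knols[knol].add(unit_id)
        self.memberships[unit_id].add(knol)

    def disperse(self, knol):
        for unit_id in self.knols.pop(knol, set()):
            self.memberships[unit_id].discard(knol)


idx = KnolIndex()
idx.add("q3-audit", "unit-001")
idx.add("legal-hold", "unit-001")   # the same unit belongs to two knols
print(idx.memberships["unit-001"])  # {'q3-audit', 'legal-hold'}
idx.disperse("q3-audit")
print(idx.memberships["unit-001"])  # {'legal-hold'}
```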