NAB made a big impression on me
I’ve been putting off writing about SNW because I’ve been busy. I still am. But the successful procrastinator has to use every excuse possible, and writing about SNW is my last one.

Paul Calleja
Paul is responsible for delivering HPC services at Cambridge University. He has an interesting take on HPC issues because he not only follows the technology, but he is also responsible for figuring out how to make it pay for itself under a government edict. Somebody paid to bring him over to SNW, and he told me who, but I didn’t write it down and now I can’t remember. So he may be shamelessly plugging someone and I wouldn’t know it.

Paul started using HPC for molecular modeling 14 years ago and has been in charge of the Cambridge HPC center for 18 months.

Some observations, as I typed them, which may differ from what he actually said:

  • His budget:
    • £2 million capital
    • £2 million operating cost
    • over a 3-year life
  • MPI (Message Passing Interface) programming is a crock – you’re lucky to get 20-25% of peak performance; 10-15% is common
  • Big problem is storage: how do you connect thousands of processors to hundreds of TB?

The good news: Paul believes that Microsoft and Intel will figure out how to make parallel programming work on CPUs like the 80-core monster Intel demo’d a few months ago.

He’s also interested in the fact that GPU power is doubling every 12 months, while CPU power is doubling every 18. That means that in three years CPU power will be up 4x while GPU power will be up 8x. In six years those numbers are 16x and 64x.
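
For anyone checking the arithmetic: if capability doubles every T months, it grows by a factor of 2^(t/T) after t months. The worked form, just the compounding math and nothing vendor-specific:

```latex
\mathrm{growth}(t) = 2^{\,t/T}
\qquad\Rightarrow\qquad
\text{CPU: } 2^{36/18} = 4\times,
\quad
\text{GPU: } 2^{36/12} = 8\times
```

Run it out to 72 months and the exponents double again: 2^(72/18) = 16x versus 2^(72/12) = 64x.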

Power factors will always bite you.

Unrated storage blogger picture!
Anil Gupta tried to get a group of storage bloggers together, which I missed because of NAB, but I did meet Anil, a very nice fellow, as well as Tony Pearson and Clark Hodge. Scroll down for the picture. That glazed look in my eye: I’d just had a couple of martinis with an i-banker.

Sun posse ambushes naive storage blogger
I’d scheduled a meeting with Nigel Dessau, Sun’s new storage sacrificial lamb marketing VP. My reputation preceded me, as it was 6 against 1.

The good news was that I didn’t recognize a single one of them, which meant some housecleaning had occurred. To put them at ease I noted that Sun storage had lost market share for 10 years straight under at least three GMs. So what would be different now?

Nigel responded after only a few milliseconds of clenched jaw. His story, in bullets:

  • Sun/StorageTek merger has led to a simplification of the storage organization:
    • The IO stack and its related software engineers have been moved to Solaris
    • Device management is owned by the storage group
    • Since the IO stack is attached to Solaris, and Solaris is open source, Sun is moving to open source storage
  • Nigel then outlined the first three of Sun’s four-part open source strategy:
    1. Solaris picked as a general purpose software platform
    2. Now, make Solaris the best choice for running storage, regardless of what the applications run on
    3. Make that software downloadable open source
    4. Monetize by TBD or maybe TBA – my notes are unclear

And he noted that Sun’s QFS acquisition turned out well.

I was favorably impressed by the group, and the fact that a couple of the women play poker with ZFS’s architect. So I’ll ratchet down the Sun storage (self-inflicted) threat meter from RED to ORANGE level. I hope I don’t regret it.

Data Domain
The charming Beth White and equally charming Ed Reidenbach took time away from more important things to meet with me as well. Data Domain is on a roll, and well they should be since they are going public. I may yet take a look at their S-1, but no promises.

DD now has 750 customers and has shipped over 2200 systems: disk-based backup appliances and diskless gateways.

I still don’t get why the industry refers to “de-duplication” rather than compression – why use a well-understood term when you can invent a new one? – but they did make the point that compression rates depend on your data types, backup policies and retention policies. Basically, the more your data stays the same, the higher your backup compression rate.
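
To make that concrete, here’s a minimal sketch of how chunk-level de-duplication produces those big “compression” ratios: fingerprint fixed-size chunks, store each unique chunk once, and the ratio is just logical bytes over stored bytes. This is a toy under assumed parameters (fixed 4 KB chunks, SHA-256 fingerprints), not a claim about how Data Domain actually does it; real appliances use variable-size chunking and layer conventional compression on top.

```python
import hashlib
import os

def dedupe_ratio(backups, chunk_size=4096):
    """Toy chunk-level de-dupe: store each unique chunk once,
    return logical bytes / stored bytes."""
    seen = set()               # fingerprints of chunks already stored
    logical = stored = 0
    for image in backups:      # each backup image is a bytes object
        for i in range(0, len(image), chunk_size):
            chunk = image[i:i + chunk_size]
            logical += len(chunk)
            digest = hashlib.sha256(chunk).digest()
            if digest not in seen:      # first time we've seen this chunk
                seen.add(digest)
                stored += len(chunk)
    return logical / stored

# 30 nightly full backups of a 1 MB "disk" where ~1% changes each night:
disk = bytearray(os.urandom(2**20))
backups = []
for night in range(30):
    pos = night * 10_000                # overwrite ~1% in a fresh spot
    disk[pos:pos + 10_000] = os.urandom(10_000)
    backups.append(bytes(disk))
print(f"{dedupe_ratio(backups):.1f}x")  # roughly 20x with these settings
```

Change the churn rate or the retention window and the ratio swings wildly, which is why the vendor ratio wars are mostly arguments about workload assumptions.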

So the 20x – 50x – 300x backup compression number arguments are a bit silly. It sounds like DD has some good technology, like their Data Invulnerability Architecture – does that come with a warranty? – as well as some up-to-the-minute features like the ability to search email archives – handy for figuring out who knew what when about stock option backdating or fired US attorneys.

Any readers using DD kit? I’d love to hear about your experience with it.

Update: W. Curtis Preston takes me to task for confusing de-dupe with compression. He didn’t change my mind, but he makes some good points in a well-written post.

The StorageMojo take
The coolest thing at SNW was that I heard from a number of up-and-coming vendors that sales have really started to move – things like 100% year-over-year growth. That says to me that some of the new paradigms are finding traction with buyers – the reason we do all this stuff. And that is the best news of all.

Comments, corrections, clarifications welcome. And a cool storage product is coming out next week. Come by Monday for the details – or as many as I was able to weasel out of the CEO.