Commoditizing public clouds

by Robin Harris on Wednesday, 8 June, 2016

I’m a guest of Hewlett-Packard Enterprise at Discover 2016 in Las Vegas, Nevada this week. I enjoy catching up with the only remaining full-line computer company. HP was a competitor in my DEC days, and since the Compaq purchase it has incorporated the remains of DEC as well.

One of their themes this year is multi-cloud infrastructure. The multi-cloud is the Swiss Army knife of cloud implementation: private, public, managed, and, for good luck, those bits you can’t or don’t want to migrate to any cloud.

HPE says they have a wide array of software and services to enable multi-cloud implementations, whose enumeration will be left as an exercise for the reader. I’m more interested in this as a competitive response to the public cloud.

HPE’s cloud integration software supports the major cloud providers as well as a customer’s private cloud. Cloud brokerage is on the horizon, allowing customers to automagically get the lowest cost cloud service for a given workload.
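The core of that brokerage decision can be sketched in a few lines. This is a purely illustrative sketch, not HPE's implementation; the provider names and hourly rates below are made up, not real rate cards:

```python
# Hypothetical cloud-brokerage sketch: given per-hour rates from several
# providers, pick the cheapest one for a workload of known duration.
def cheapest_provider(workload_hours, rates):
    """Return (provider, total_cost) with the lowest total cost."""
    costs = {p: rate * workload_hours for p, rate in rates.items()}
    provider = min(costs, key=costs.get)
    return provider, costs[provider]

# Illustrative $/hour figures only -- not actual provider pricing.
rates = {"aws": 0.096, "azure": 0.102, "gcp": 0.089}
print(cheapest_provider(720, rates))  # one month of compute
```

A real broker would of course weigh egress fees, data gravity, and compliance constraints, not just the compute rate, but the shape of the decision is the same.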

The StorageMojo take
Today, of course, Google, Microsoft and AWS have very different strengths and capabilities. But to the extent that customers have common cloud needs – and to the extent that cloud providers care to respond to competition – they will tend to converge over time.

That convergence is another name for commoditization. And if HPE and others encourage their customers to play cloud providers off against each other based on price, that will shift the market’s center of gravity.

This isn’t a 24-month shift; it’s more like 10 years. Yet as we look at the future arc of public cloud adoption, we start to see how current vendors can effectively respond.

Short answer: public clouds are not the inevitable winners over enterprise data centers. The battle will continue to evolve, for the benefit of all of us, if not all vendors.

Courteous comments welcome, of course.


Andy Lawrence June 9, 2016 at 11:06 am

I wonder how much this approach will have in common with database platforms. A number of companies have tried to make their software ‘database agnostic’ so that the underlying database can be swapped out and a different one put in its place without significant code changes.

Even though the various database systems out there (MySQL, Postgres, SQL Server, Oracle, etc.) have converged on a ton of features, there are still a lot of differences between them. It is not trivial to switch database vendors and migrate all your data from one system to another. This is especially true if you use ANY feature that your existing vendor has implemented to distinguish itself from its competitors.

If you want to prevent ‘vendor lock-in’, you have to code to the most vanilla feature set that all the vendors support (even if your vendor is an open source solution). This might put your product at a competitive disadvantage to other products that are fine-tuned to a particular database system.
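The trade-off shows up even in something as small as an upsert. Here is a minimal sketch using Python's standard-library sqlite3 (the table and keys are invented for illustration): the portable version works on essentially any SQL engine, while the one-statement version uses SQLite's `ON CONFLICT` syntax, which would have to be rewritten for MySQL (`ON DUPLICATE KEY UPDATE`) or SQL Server (`MERGE`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")

def upsert_portable(conn, k, v):
    """Lowest-common-denominator upsert: two statements, plain ANSI SQL."""
    cur = conn.execute("UPDATE kv SET v = ? WHERE k = ?", (v, k))
    if cur.rowcount == 0:
        conn.execute("INSERT INTO kv (k, v) VALUES (?, ?)", (k, v))

def upsert_sqlite(conn, k, v):
    """Vendor-specific upsert: one statement, but SQLite dialect only."""
    conn.execute(
        "INSERT INTO kv (k, v) VALUES (?, ?) "
        "ON CONFLICT(k) DO UPDATE SET v = excluded.v",
        (k, v),
    )

upsert_portable(conn, "region", "us-east")
upsert_sqlite(conn, "region", "eu-west")
print(conn.execute("SELECT v FROM kv WHERE k = 'region'").fetchone()[0])
```

The vanilla version costs an extra round trip and leaves conflict handling in application code; the dialect version is faster and atomic but ties you to the vendor. That is the lock-in dilemma in miniature.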

I can see the same dilemma for those who use various cloud platforms.

Jerry Leichter June 10, 2016 at 6:07 am

This calls to mind an interesting bit of historical trivia: Back in the day, DEC provided compilers for a wide variety of programming languages. In almost all cases, they added some proprietary extensions. But DEC manuals for programming languages – which were really excellent – always pointed out the extensions very clearly. One edition of the FORTRAN manual put documentation of extensions over a blue background – blue, of course, being DEC’s color in those days. I don’t recall for certain, but I believe IBM manuals took a similar approach.

I can’t recall the last time I saw any vendor make it clear what were extensions and what were standard features. Sun man pages for Unix were among the first major setters of this trend. Today, the sales and marketing stuff will call out significant “enhancements” – but the technical documentation, the stuff that developers will use as a reference – pretty much treats everything the same way.

I’d be curious of any examples of the old-style practices that people see around today.

Actually, I know of one example: GNU man pages for command line utilities and such are generally very good about pointing out which options and behaviors are standard and which are GNU extensions. (In fact, they generally have a way to request “Posix standard behavior” – as some of the older compilers did. Such options have become quite rare in commercial software, too.)

— Jerry

Andy Lawrence June 13, 2016 at 12:19 pm

Lowest Common Denominator == Stagnation

There are a ton of advanced features that various vendors would like to put into their products and have users take advantage of. But if everyone only codes to the lowest common denominator, advanced features never make it in the market.

This is one of the reasons why file systems have been largely stagnant over the past 30+ years. If nobody will code to any kind of extension, why have them? Only those features that fit a strict POSIX set will be accepted by the marketplace.

Extended attributes never really caught on: not all file systems supported them, so almost no one used them.
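For the curious, extended attributes let you attach arbitrary key/value metadata to a file outside the POSIX baseline. The sketch below shows why portable code shies away from them: Python exposes `os.setxattr`/`os.getxattr` on Linux only, and even there the underlying filesystem may refuse them, so the code has to guard both cases (the attribute name and value here are invented for illustration):

```python
import os
import tempfile

def tag_file(path, key, value):
    """Attach an extended attribute to a file; return it back, or None
    when the platform or filesystem does not support xattrs."""
    if not hasattr(os, "setxattr"):
        return None  # non-Linux platform: API not available at all
    try:
        os.setxattr(path, key, value)
        return os.getxattr(path, key)
    except OSError:
        return None  # filesystem mounted without user xattr support

with tempfile.NamedTemporaryFile() as f:
    # User-namespace attributes must be prefixed "user." on Linux.
    print(tag_file(f.name, "user.origin", b"storagemojo"))
```

The double fallback is exactly the portability tax the comment describes: any feature outside the common denominator forces this kind of defensive wrapping, so most software simply never uses it.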

Paul Blitz November 24, 2016 at 3:05 am

Talking of multi-cloud infrastructures, the cloud is definitely here to stay and makes life simpler for us. It’s not just documents and files: did you know that video marketers can now back up entire YouTube channels? That means not being at YouTube’s mercy to store your videos and being professionally crippled when it randomly deletes your channel. Check this:
