Tim O’Reilly has a post, Operations: The New Secret Sauce, about a discussion with Microsoft’s VP of Operations for Windows Live and how the new massive-scale internet data centers of Microsoft, Google et al. change everything:
. . . once we move to software as a service, everything we thought we knew about competitive advantage has to be rethought. Operations becomes the elephant in the room. . . . Here are a couple of the most provocative assertions from our conversation:
- Being a developer “on someone’s platform” may ultimately mean running your app in their data center, not just using their APIs.
- Internet-scale applications are pushing the envelope on operational competence, but enterprise-class applications will follow. And here, Microsoft has a key advantage over open source, because the Windows Live team and the Windows Server and tools team work far more closely together than open source projects work with companies like Yahoo!, Amazon, or Google.
Pardon me if neither of these seems provocative at all.
- Sharing costly infrastructure, especially among small players, is ancient. Think Noah’s ark. Air traffic control. Highways. Telecom.
- Internet-scale operations are the cutting edge right now. Smaller-scale operators will learn from them, as should all their vendors, storage and open source alike, who want to be in business in 10 years.
Really, ask Microsoft’s VP of Windows Live ops whether ops is important and what do you expect: “nah, it’s stupid and a waste of time”?
Or ask Microsoft whether using its own software is a Good Thing and open source a bad thing: “it sucks. We’re so caught up in our bug-ridden security nightmare we hardly have time to document all the problems.” Is anyone other than Tim surprised? Is Tim even really surprised?
IBM has been bloviating about autonomic computing for years. And their smart guys who were thinking about it were absolutely on the right track. Only problem: when your entire corporate culture is predicated on providing hand-holding to big enterprise data centers, no one is too interested in destroying that business. And when your key software strategy for the last decade is providing software glue (middleware), no one is receptive to upending that business either.
Now Google has built the closest thing we have today to an autonomic data center – and I have no doubt big pieces are held together with chewing gum and baling wire – and has a significant cost advantage over Yahoo, Microsoft and, if they get into that business, eBay. So now we are waking up to the fact that yes, data center life can be better.
Big Iron storage vendors and their unindicted co-conspirators, the storage management software folks, have the same problem. They make a hell of a lot of money from providing islands of data and management, and have no particular incentive to change.
Until now. The elements of radical change in the data center are coming together. It won’t be fast, but like a leaking dike, a dozen and then hundreds of rivulets of change are merging. When the dike gives way, much will be swept away, just as the minicomputer companies were in the PC revolution.
He who has ears to hear, let him hear. Hopkinton? Armonk? Palo Alto? Santa Clara?
Sand Hill Road?
Can you give some examples of rivulets of change? I’d be curious to see what your first three are.
Good question – there are many. My top three:
1. Corporate data is cooling fast – most of it is rarely touched once written – meaning standard high-performance arrays overshoot most of the market.
2. Web-based services, such as Amazon’s S3, put traditional IT under competitive cost pressure it has never had before (a rough back-of-envelope sketch follows below).
3. GFS and ZFS demonstrate that the balance of power between elaborate hardware and smart software is shifting once again, just as it did in the ’80s with CISC and RISC.
What are your top three?
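To put some rough numbers behind point 2, here is a minimal back-of-envelope sketch. The S3 figure is approximately its launch storage price; every array-side number is an assumption picked only to show the shape of the comparison, not a vendor quote.

```python
# Back-of-envelope sketch only: compares pay-as-you-go hosted storage
# (S3-style, at roughly its $0.15/GB-month launch price) with owning a
# small array. Every array-side figure is an assumption for illustration.

GB = 1_000              # working set in GB (assumed)
MONTHS = 36             # typical depreciation horizon (assumed)

# Hosted side: storage fees only; transfer and request charges ignored.
S3_PER_GB_MONTH = 0.15  # approximate S3 storage price at launch
hosted_total = GB * S3_PER_GB_MONTH * MONTHS

# Owned side: purchase price plus ongoing admin, power, and floor space.
ARRAY_PURCHASE = 25_000         # assumed price of a small mid-range array
ARRAY_OPEX_PER_MONTH = 500      # assumed admin + power + cooling + space
owned_total = ARRAY_PURCHASE + ARRAY_OPEX_PER_MONTH * MONTHS

print(f"Hosted (S3-style), {GB} GB x {MONTHS} months: ${hosted_total:,.0f}")
print(f"Owned array over {MONTHS} months:             ${owned_total:,.0f}")
```

The exact figures matter less than the fact that a public utility price list makes this comparison trivial to run, which is exactly the cost pressure traditional IT has never faced.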
Rivulets! I like that. Rivulets sound more substantial than “Breezes of Change!”.
My rivulet has turned into a raging stream.
Most IT people are riding the “mechanical bull” because they are still working from the technology side. Technology is enabling, but so is a car. How enabling is a car without a driver?
The real driver of change is the Information and the way people want to relate to it. I would add Social Software, Ambient Findability and the User Experience (UX) to the top of the rivulets list.
The telephone companies cannot respond to the wireless explosion. They are trying to control the cellular explosion, to their own detriment. Ever talk to a vendor or telephone company about a “wireless” Data Center? First you get dollar signs flashing in their eyes, and then they go dull and empty. They don’t have a clue how to start. They just know there has to be a lot of $$$$ in there somewhere.
My “Information” third eye was opened the first time I had to enable some Information in “ad hoc” space that was multiply linked with infinite persistence. That means it always has to be accessible somewhere. Try doing that by throwing NAS or SANs, or even a hybrid NAS/SAN, at it. My favorite enabling technology is still the “roll your own” hybrid NAS/SAN. It is so flexible! So robust! So powerful! So cheap! Which is why Storage vendors hate it.
Before you can Design a solution that you have any hope of Implementing, you must have a Strategy that fits the situation. The Strategy I recommend, because I have seen it work, is the Pervasive Information Fabric. The magic is in “Weaving the Pervasive Information Fabric using agents and multiagents”.
It actually works well for non-“ad hoc” Information spaces and hybrid “ad hoc” and non-“ad hoc” spaces.
It is totally vendor-independent. The generic enabling technology used is determined by the clear Line of Sight from the company Portfolio Management to the Lines of Business (LOB) to the IT Portfolio Management to the Lower Metrics. The Lower Metrics are the first place you might encounter specific vendor “brand name” enabling technology.
This Strategy is enabled by determining the “Speed Limit of the Information Universe” for your company. Once that is determined, a Cost Analysis is run to see whether you can afford it. Then the scaling begins.