Microsoft has VMware in its sights and there will be blood. Kevin Turner, Microsoft’s COO, gave what reads like a real barn-burner speech at the Microsoft partner conference.
He covered a lot of ground, but this stark warning to EMC/VMware was the toughest. He clearly sees that they can beat VMware and knows exactly how to do it. He’s addressing partners, hence the focus on margin and investment.
We launched [Hyper-V] in October of last year. Since October of last year, we’re at over 24 points of market share from the day we put it in the marketplace. . . . The R2 product that we’ve got coming out gives . . . . them live migration with R2 and keeping the same value proposition.
And so I hear a lot about margins, hey, the VMware program’s got higher margins. Well, if I charged you US$58,000, I’d give you higher margins too. That’s not what we charge customers. We charge them US$9600 and that’s why our margins are what they are. So, as you line up to make your bets, if you want to bet against us in this particular space, we’re coming right at you. . . . 100 percent of what we’re putting in our Microsoft datacenters today is virtualized using our product. And so we’re very, very excited about what we do.
. . . we’re going to get this virtualization tax, the VMware tax out there and start driving people crazy with the value proposition. . . .
And I’ve tried to be very, very candid and honest with this comparison so that there’s nobody in the room thinking we have our head in the sand and we don’t understand our opportunity or where they’re better than us. We do. . . . And we’re just going to keep improving the product and keep growing the market share.
But this is a space we’re not going to lose in, ladies and gentlemen, and so as you think about where to put your investments, we encourage you to take a hard look at our value proposition because we want our customers to pay less and get more . . . .
Market vs feature vs benefit
Are virtual machines a market or a feature? Or a benefit required by the steady march of Moore’s Law? (Too bad there’s not a Moore’s Law for applications – forcing them to use 2x the cycles every 18 months.)
Turner is treating virtual machines as a market – and until everyone has roughly feature equivalent implementations – it is. But someday VM capability will be standard on every major server OS. And then what?
The StorageMojo take
I remember when virtual memory helped define a market segment: superminicomputers. Vendors compared VM implementations and helpfully pointed out the problems in the other guy’s architecture.
Today virtual memory is not a differentiator. You have to have it. It operates in the background and 99% of all users have no idea it exists, let alone what it does. (Although with RAM at $10/GB, maybe we don’t need it anymore.)
IF virtual machines become widely used on the desktop for, say, security, then the technology will become a feature, included at no extra charge. If servers remain the focus, then vendors will be able to milk it for at least another decade.
But who will do the milking? If Microsoft can maintain their focus, I’ll bet on them.
Courteous comments welcome, of course.
Software is getting slower more rapidly than hardware becomes faster.
— Wirth’s Law, http://en.wikipedia.org/wiki/Wirth%27s_law
See also Gates’ Law: commercial software generally slows by fifty percent every 18 months. (Not coined by Bill Gates, but rather for him, or, presumably, his products.)
We’ve heard the same fire-and-brimstone from Cisco about Riverbed, and plenty of analysts predicting Riverbed’s demise at the hands of the 800 lb gorilla that is Cisco.
What’s happened there? Cisco keeps improving its product, but so does Riverbed, which stays significantly ahead. In fact, according to Gartner, Cisco is slipping rather than gaining.
I expect the same here. Microsoft is about to release the hypervisor equivalent of VMotion, which VMware debuted when? Four years ago? As long as VMware keeps innovating, it will stay far, far ahead of Microsoft. Microsoft keeps touting the price difference, but they aren’t equivalent products. The free VMware Server is probably the closest thing to Hyper-V. Microsoft will never uncouple the hypervisor from the OS, and that will be their demise.
It’ll be interesting to see (in 15 years) how the legal battle over this plays out. VMware will push the anti-trust angle again.
–Jason
Robin – I pretty strongly agree with you. MS is a relentless competitor, and they will keep on coming. There are few companies that I can think of that have not only withstood MS, but done so enough to get MS to leave the market (Intuit comes to mind). I definitely can’t think of anyone in the systems software world.
MS has done a good job getting in the data center by undercutting the competition on price and improving their products over time (e.g. SQL Server). They will do the exact same thing again here…except they can bundle with Windows more effectively.
It’s very hard for me to see anything but the future you painted, given how close to MS’s core expertise this is. VMware will no doubt try to innovate on management and on superior support and integration for Linux…but it may not be enough to differentiate.
Also, you have the free VMMs on the Linux side as well (KVM, Xen, etc.).
DK
Virtual memory is great as long as you don’t actually use it. I should qualify that: having large amounts of virtual memory available is extremely useful in that programmers no longer have to be careful about how they manage memory. All that memory allocated just in case, and perhaps never looked at again, gets consigned to disk. If I’m to be a bit cynical about this, I might remark that virtual memory is essential to allow us to run all those programs with memory leaks for a sufficiently long time.
However, once you get into anything approaching a significant level of demand paging, performance collapses. It was bad enough when processors ran at a few tens of MHz; now we are running a couple of orders of magnitude faster, yet disk latency is only better by a factor of perhaps 4 or 5, so significant paging is something of a disaster.
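The point above can be put in back-of-envelope numbers: CPU clocks improved far faster than disk latency, so every page fault now wastes vastly more CPU cycles than it once did. The figures below are rough illustrative assumptions, not measurements.

```python
# Rough illustration: cycles a CPU idles away per page fault,
# then vs. now. All numbers are ballpark assumptions.

then_cpu_hz = 25e6             # ~25 MHz processor, late 1980s
now_cpu_hz = 2.5e9             # ~2.5 GHz processor, circa 2009 (100x faster)
then_disk_latency_s = 30e-3    # ~30 ms average seek + rotation
now_disk_latency_s = 7e-3      # ~7 ms, only ~4x better

then_cycles_per_fault = then_cpu_hz * then_disk_latency_s
now_cycles_per_fault = now_cpu_hz * now_disk_latency_s

print(f"cycles lost per page fault, then: {then_cycles_per_fault:,.0f}")
print(f"cycles lost per page fault, now:  {now_cycles_per_fault:,.0f}")
print(f"relative cost increase: {now_cycles_per_fault / then_cycles_per_fault:.0f}x")
```

With these assumed numbers, a single fault goes from costing under a million cycles to well over ten million — which is why heavy paging that was merely painful then is catastrophic now.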
Of course virtual memory should not be looked on purely as stuff written to disk. Virtual memory mapping is far more important in a logical context. It allows us to share read-only pages and map multiple 32-bit execution spaces into far larger physical memory spaces. It also allows for very large memory-mapped file systems and all sorts of other techniques that ease the writing of programs. That’s the true legacy of virtual memory: not extending physical memory onto backing store, but providing a virtual memory mapping system that enables simpler programming models.
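A minimal sketch of that "simpler programming model" point, using Python’s `mmap` module: a memory-mapped file can be read and modified as if it were an ordinary byte array, with the OS’s virtual memory machinery doing the I/O behind the scenes. The scratch file and its contents are arbitrary for illustration.

```python
# Memory-mapped file access: no explicit read()/write()/seek() calls,
# just byte-array indexing; the VM system pages data in and out.
import mmap
import os
import tempfile

# Create a small scratch file to map (contents are arbitrary).
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"hello, virtual memory")

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:   # map the whole file
        assert mm[:5] == b"hello"          # read via indexing
        mm[0:5] = b"HELLO"                 # modify in place via the mapping

# Closing the mapping flushes the change through to the file.
with open(path, "rb") as f:
    content = f.read()
print(content)

os.remove(path)
```

The program never issues an explicit write, yet the file on disk ends up modified — the mapping, not the programmer, manages the data movement.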
Hi Robin,
I recently did some work on a Hyper-V deployment at a large household name in the UK. VMware was also being pushed. MS did a Hyper-V presentation for the customer and could not get it to work during the presentation (seriously) but still won the business on… you guessed it, cost!
The customer fully expects MS to improve and innovate, which I’m sure they will, and see an opportunity to get on the VM bandwagon, with some of its benefits (not as many as VMware yet), at a good price.
I think it will take time (years) but MS market share will continue to creep up. They are a long way behind, and although they keep talking about features in R2, I’m pretty sure it has not shipped yet; even when it does, it will have the usual teething problems and be somewhat of a let-down. Still, VMware had better be innovating like crazy.
While the VM market is booming at the moment, I’m not surprised MS is undercutting to gain market share. It’s a lot harder to displace somebody once they own the market or have a customer than it is to stop them owning the market or winning the customer in the first place.
VM technology in the x86 x64 space is really interesting these days. Bring it on!
Amusing aside on the virtual memory topic: it harks back to the days of the new-fangled 32-bit superminis, and of DG upgrading AOS to AOS/VS (Virtual Storage), possibly the most imaginative rename in history. And Tracy Kidder’s book The Soul of a New Machine. Good times.
So, your comments (and some follow-up comments) made me reflect on why virtual memory made sense and the current “cool new beans” of the storage world. So I ask…
Is virtual memory (25+ year old technology) much different from the current killer apps, “thin provisioning” and “automated, tiered storage”? I mean, same idea: pre-allocate a bunch of pretend space to avoid space limitations and the need for careful calculations… and then move little-used bits to the slow side and high-use bits to the fast side. And it all goes south if you have more active bits than you thought or planned for. Déjà vu. Isn’t this where I came in?
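The analogy above can be sketched in a few lines: like demand-paged virtual memory, a thin-provisioned volume advertises a big address space up front and only allocates real backing blocks when something is actually written. This is a toy illustration, not a model of any real storage product’s API.

```python
# Toy thin-provisioned volume: backing blocks are allocated lazily,
# on first write, just as VM pages are allocated on first touch.

class ThinVolume:
    def __init__(self, advertised_blocks):
        self.advertised_blocks = advertised_blocks  # what the host sees
        self.backing = {}                           # blocks actually allocated

    def write(self, block_no, data):
        if not 0 <= block_no < self.advertised_blocks:
            raise IndexError("write beyond advertised size")
        self.backing[block_no] = data  # allocate on first write

    def read(self, block_no):
        # Unwritten blocks read back as zeros, like untouched pages
        # in a demand-zero virtual memory region.
        return self.backing.get(block_no, b"\x00")

    def allocated(self):
        return len(self.backing)

vol = ThinVolume(advertised_blocks=1_000_000)  # "pretend" 1M blocks promised
vol.write(42, b"x")
print(vol.allocated())  # only 1 block of real capacity consumed
print(vol.read(0))      # an untouched block reads back as zero
```

And the failure mode is the same too: if more blocks turn out to be live than the real capacity behind the promises, the pretend space "goes south" exactly as over-committed virtual memory does.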
I wonder who first said “The more things change, the more they stay the same.” I would give him or her kudos, except that I suspect the first version of this bit of wisdom was chiseled into the walls of the Pyramids some 4K years ago (decimal or binary, take your pick)…