I was asked at the SNIA nonvolatile memory conference why I did not include virtualization as a major driver for the use of nonvolatile memory. Flash helps with the multiple virtual machine I/O blender problem.
But we also had that problem when we were running multiple applications on a single machine. Yes, performance requirements were lower, but we managed.
I consider virtualization a feature, not a market. Why is virtualization a feature rather than a long-term product, as many keen observers believe?
VM history
In the 1970s we had the virtual memory operating system wars. A number of companies, including DEC, Prime and IBM, developed virtual memory operating systems.
The key enabler of the virtual memory operating systems was the advent of 32-bit processors with true 32-bit address spaces. This was back when people programmed on minicomputers with as little as a quarter of a megabyte and at most 4 MB of physical memory.
The VAX/VMS (Virtual Address eXtension/Virtual Memory System) OS, in contrast, offered a massive 4 GB virtual address space, 2 GB of which were reserved for the system and 2 GB for user processes.
Virtual memory operating systems enabled important changes for the industry. Software developers could focus on making their software as functional as possible without worrying about the underlying memory architecture.
The virtual memory wars continued into the PC era, when MS-DOS was limited to a 640KB address space and Quarterdeck systems came in with QEMM. But eventually Intel and Microsoft got their act together and solved the problem of virtual memory for the masses.
No one pays extra for a virtual memory operating system today. It is expected functionality that most don’t even know exists.
Just as 32-bit processors enabled virtual memory, the advent of microprocessors powerful enough to run multiple applications created the virtual machine opportunity. If Microsoft had not been intent on selling as many server licenses as possible, they could have improved their multitasking so that the problem of server sprawl might never have occurred.
Be that as it may, there is nothing in today’s virtualization technology that could not and should not be incorporated into server operating systems. In 20 years new CompSci grads won’t know that virtual machines weren’t always built into the OS.
Feature vs product
Now just because something is at bottom a feature rather than a product doesn’t mean that you can’t make gobs of money before it becomes common. VMware is one example.
Data deduplication, for example, is clearly a feature. But the founders of Data Domain were able to make a lot of money by exploiting that feature with a high-quality application-focused implementation and being first to market.
Now deduplication is being built into new products. While debates over implementation details will continue among engineers, in a few years most users will see deduplication as a checkbox item.
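To make the point concrete, here’s a minimal sketch of what block-level deduplication boils down to. The fixed 4 KB blocks and the in-memory dictionary are illustrative assumptions on my part – real products use variable-size chunking, disk-resident indexes and garbage collection – but the core idea is small enough to be a feature.

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative fixed block size; real systems often use variable-size chunks


def dedup_store(data: bytes):
    """Split data into blocks and store each unique block once, keyed by its hash."""
    store = {}   # hash -> block contents (stands in for the backend block store)
    recipe = []  # ordered list of hashes needed to reconstruct the original data
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:  # only previously unseen blocks consume space
            store[digest] = block
        recipe.append(digest)
    return store, recipe


def restore(store, recipe):
    """Reassemble the original data from its recipe."""
    return b"".join(store[d] for d in recipe)


if __name__ == "__main__":
    data = b"A" * 8192 + b"B" * 4096 + b"A" * 4096  # redundant data dedupes well
    store, recipe = dedup_store(data)
    assert restore(store, recipe) == data
    print(f"logical blocks: {len(recipe)}, unique blocks stored: {len(store)}")
```

Making that idea work at petabyte scale – chunking, index size, garbage collection – is what Data Domain sold; the idea itself is what everyone else is now building in.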
The transient and the permanent
How do we distinguish between a feature and a product? It is the difference between the transient and the permanent.
Transient problems can be resolved. Permanent problems can only be managed.
Processor virtual memory management is a problem that has been solved for the majority of the world’s computers. Data de-duplication can be added over time to storage systems as computing power increases and the cost of bandwidth and random I/Os – thanks NVM! – drops.
But some problems can only be managed, not solved. The issues of scale, aggregation and metadata – among others – will always be with us.
Like gas-rich regions of galactic star formation, these manageable-but-not-solvable issues will continue to be areas rich in startup formation.
The StorageMojo take
Applying this theory to current markets yields some predictions:
- VMware, despite its feature-rich ecosystem and early lead, will lose to vendors, such as Microsoft and Red Hat, who can incorporate the most important virtualization features into their OS. VMware has no OS to fall back on and thus has no long-term future.
- Data Domain is a wasting asset. As others add dedup to their products, DD’s differentiation will decline along with its market value.
- Scale-out storage, like Isilon, will remain a lively market segment as the economics of disk, NVM, software, aggregation and metadata keep changing the calculus of efficient and durable storage.
- The improving economics of erasure coding will enable more efficient approaches to backup and archiving – as long as Moore’s Law continues to hold (see the sketch after this list).
- System management is a permanent problem. When we get autonomic management at one level, the problem just kicks up a level with increasing scale.
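On the erasure coding bullet above: a back-of-the-envelope comparison shows why the economics matter. The 3-way replication and 10+4 code below are example parameters of my choosing, not anyone’s shipping configuration; the CPU cost of encoding and rebuilding is the part Moore’s Law keeps shrinking.

```python
def replication_overhead(copies: int) -> float:
    """Raw bytes stored per logical byte with simple N-way replication."""
    return float(copies)


def erasure_overhead(k: int, m: int) -> float:
    """Raw bytes stored per logical byte with a k+m erasure code:
    k data fragments plus m parity fragments, surviving any m lost fragments."""
    return (k + m) / k


if __name__ == "__main__":
    # Example parameters only -- real systems pick k and m to balance
    # durability, rebuild traffic, and the CPU cost of encoding/decoding.
    print(f"3-way replication: {replication_overhead(3):.2f}x raw capacity, survives 2 losses")
    print(f"10+4 erasure code: {erasure_overhead(10, 4):.2f}x raw capacity, survives 4 losses")
```

Equal or better failure tolerance at less than half the raw capacity – provided you can afford the compute to encode and rebuild.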
Just as no one remembers the critical register and segment management skills required for 16-bit minicomputers, in a decade or so all the painfully acquired knowledge required to manage VMs will lose value as it gets built into the OS infrastructure. But there will always be new worlds to conquer.
Courteous comments welcome, of course. How would you define a feature vs a product?
That’s an interesting and insightful way to frame this issue. I agree: VMware’s position – keeping one step ahead of their competition – will succumb to the law of diminishing returns, and inertia and FUD will be among the few levers holding back their customers from defecting.
I take issue with the potshot at Microsoft with the multitasking comment, though. The problem of server sprawl is not really about OS multitasking capability. After all, the impetus to virtualize those servers in the first place was their low resource utilization (don’t forget that “NT is VMS reimplemented” and fully capable when understood and treated properly). The real issue is maintenance, dependencies, and bugginess of the various applications running on top. If you run two applications on the same box, decide to upgrade one, and the upgrade crashes and burns, you run the risk of having to take the other application offline when the vendor requires full access to the box to resolve the issue.
At the time – the 2nd half of the 90s – stability of MS Server with more than 1 app was not good. There was a sweet spot where 1 x86 CPU + 1 pizza box server ≈ 1 average app’s performance requirement. But Moore’s Law marched on and a single server could handle multiple apps – but only if the OS would stay up. VMware solved that problem.
VMware knows this: witness their attempt to get people to write apps against their VM layer with NO OS. Hypothetically, it’s actually a pretty cool idea from a security perspective. However, the people with the ability to write solid code at effectively the kernel level are too thin on the ground to make this a reality.
–Jason
I was there too, and I disagree with that oversimplification. People were trying to do things with a wide variety of commodity hardware that had previously been done on relatively limited-run, proprietary hardware, and some of it just didn’t match. Microsoft undoubtedly shares some of the blame – moving the video, printer, and other drivers into kernel mode (in NT4; later moved back again) is only a good idea if you can absolutely trust those drivers. And therein lay a major problem: aside from very dubious software and driver quality (and an apparent lack of knowledge about how to write for NT), the hardware manufacturers, presumably in the interest of driving down cost, swapped components and made minor revisions to what they obviously felt were commodity parts. Those small differences never showed up on a spec sheet, since the part SKUs didn’t change. You’d end up with some servers that you had to take turns rebooting round-robin, and others with no issues at all.
I’m not saying NT didn’t/doesn’t have issues. I just don’t find the characterization that it was/is unable to multitask accurate, nor do I believe that Microsoft somehow intentionally hobbled multitasking so that they could sell more licenses. There were other practical and tangential reasons why it just wasn’t done.
@Jason: I’ve heard of VMware’s plans to do that, but I don’t quite understand it. No OS? Kernel-level programming of apps? No real APIs? It really just sounds like creating another OS, only this one has better app isolation but is really horrible, inefficient, and ultimately unrealistic to program against. You’re right–who would sign up for that?
It sounds a little bit reminiscent of Novell’s ability to run NLM apps back in the day. It’s hard to see it going anywhere.
Ryan, I didn’t say “. . . that it was/is unable to multitask. . . .” I said that the stability of MS server with more than 1 app was not good. I could have qualified that further by noting that that was a customer perception, but that is what led many to choose the 1 server/1 app model.
Whether that was Microsoft’s “fault” is open to discussion. Clearly, they chose a horizontally integrated business model, rather than the vertically integrated model of earlier vendors – almost all of whom failed – so it could be described as a systemic failure with shared responsibility. I’m not convinced, though, because if you want to play in the data center, you play by the data center’s rules.
By not meeting data center needs, MS left the door wide open for VMware to come in and fix a problem that, IMHO, should never have existed.
Robin
Virtualization as a feature is what MS, Redhat and Sun/Oracle have been pitching for a number of years (around 5 years or so?).
The same companies have been incorporating advanced storage features (iSCSI, snapshots, dedup, clustering) for as long (longer?), but I don’t see many enterprises dropping their enterprise storage vendor for MS/Redhat/Solaris. Granted, some companies go the route of Linux/Samba/NFS on open hardware, but that group isn’t in the market for VMware either.
As long as VMware provides…
– a multi-OS virtualization solution
– that is more manageable and scalable
– and which doesn’t have exorbitant licensing (*) compared to Red Hat/MS or to running non-virtualized
…customers will stay with them.
(*) the vRAM licensing fiasco.
VMware certainly has some challenges ahead as it doesn’t own the entire stack, with the OS being a key piece. Virtualization adoption has been driven by the goal of maximizing utilization of CPUs by running extra VMs on the same physical host. While virtual memory mgmt is part of the solution, I am not sure it is the only thing. As we see increased integration of system mgmt functionality into virtualization mgmt tools, it helps move the discussion away from the hypervisor to reduced complexity and reduced cost of running servers.
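A rough consolidation calculation makes the utilization argument concrete – the figures below are illustrative assumptions, not measurements from any real environment:

```python
import math

# Back-of-the-envelope server consolidation math (illustrative numbers only).
servers = 20                # physical one-app-per-box servers
avg_utilization = 0.08      # assumed average CPU utilization of each server
target_utilization = 0.60   # how hot we are willing to run a virtualized host

demand = servers * avg_utilization  # total CPU demand, in "server units"
hosts_needed = max(1, math.ceil(demand / target_utilization))

print(f"Total demand: {demand:.1f} server-equivalents of CPU")
print(f"Hosts needed at {target_utilization:.0%} target utilization: {hosts_needed}")
print(f"Consolidation ratio: {servers}:{hosts_needed}")
```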
Because Microsoft servers + apps were so buggy, the “one server per app” attitude polluted the entire computing market for 20 years.
For example, security rules were written that *required* one server per app, despite decades of good security on better operating systems. Which led to virtual machines and VMware, as if another buggy layer of software would magically stop one compromised app+VM from attacking another, running on the same device.
Then we piled on more stupidity by creating virtual firewalls to protect virtual machines from each other (which solved only part of the problem). And layers of software to manage proliferating virtual machines.
Because virtual machines required so much overhead to switch context, Intel had to add instructions to their chip architectures to make VM context switching faster.
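(For what it’s worth, those bolted-on extensions – Intel’s VT-x and AMD’s AMD-V – show up as CPU feature flags. The Linux-only sketch below is purely illustrative, not any vendor’s tooling; it just reports whether the chip advertises them.)

```python
# Minimal check for hardware virtualization support on Linux:
# look for the 'vmx' (Intel VT-x) or 'svm' (AMD-V) flag in /proc/cpuinfo.

def hw_virt_support(cpuinfo_path: str = "/proc/cpuinfo") -> str:
    """Report which hardware virtualization extension, if any, the CPU advertises."""
    try:
        with open(cpuinfo_path) as f:
            flags = {
                flag
                for line in f
                if line.startswith("flags")
                for flag in line.split(":", 1)[1].split()
            }
    except OSError:
        return "unknown (no /proc/cpuinfo -- not Linux?)"
    if "vmx" in flags:
        return "Intel VT-x"
    if "svm" in flags:
        return "AMD-V"
    return "none advertised"


if __name__ == "__main__":
    print("Hardware virtualization support:", hw_virt_support())
```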
And VMware wants us to move more functionality into their layer, like storage management, device drivers, and apps. What’s wrong with this picture?
And on and on. One buggy operating system with buggy application software drives billions and billions of wasted dollars and hundreds of person-years of wasted talent. Of course, one person’s waste is another’s market opportunity.
Maybe, someday, Microsoft operating systems will catch up to the state-of-the-art from the 1960s-1970s (e.g. Multics), with real security and real application isolation.
But almost 20 years after the release of Windows NT on July 27, 1993, I’m not holding my breath any more. And almost no one will remember what stable, secure operating systems are capable of doing.
I like VMware’s minimalistic hypervisor approach and I think they are (currently) on the right track. I’m not sure what they are planning for the next versions, though, to make me want to buy the upgrade (other than support for new OS versions). Most common hypervisors are already free to use (ESXi, Xen, Hyper-V Server) and the money is in the management, automation and data protection of VMs.
If so, why aren’t IBM AIX WPARs used by customers?
That could easily be improved to fit your wish list.
Very interesting. I’m not far from buying into this line of thinking after reading this. If you look at all the seemingly unneeded VMware layoffs, it might seem that VMware too believes this. However, VMware has three tricks up its sleeve (currently) that aren’t mentioned:
1) VMware is de-emphasizing the hypervisor and focusing on managing virtual machines instead. If vCenter managed RHEV and Hyper-V the way it manages ESXi, VMware could stay in the mix.
2) VMware, like Apple, has a very loyal user base. They promote their certification heavily and IT shops look for these certs when hiring.
3) VMware is entering new markets left and right as they try to become the “software-defined datacenter” and not just the virtual machine company. Extending beyond your core competencies can be a risk and needs the right “mojo” to succeed – Robin, before you ask, the answer is “yes” – Cisco has the right mojo to succeed at UCS 🙂
I think you’ve overlooked [or at least failed to discuss] why VMware is a product:
Current Intel CPU families are not designed to support virtualization [certain instructions – POPF, for example – silently misbehave in user mode instead of trapping]. This means that software implementations have high costs and complicated failure modes. They can mostly work, but there are laborious issues to be dealt with. You can also buy a hardware solution from Intel (VT-x).
In other words, VMware is not just “selling virtualization” – they are selling a software implementation of Intel virtualization.