I was asked at the SNIA nonvolatile memory conference why I did not include virtualization as a major driver for the use of nonvolatile memory. Flash does help with the I/O blender problem created by running multiple virtual machines on one box.

But we also had that problem when we were running multiple applications on a single machine. Yes, performance requirements were lower, but we managed.

I consider virtualization a feature, not a market. Why is virtualization a feature rather than the long-term product that many keen observers believe it to be?

VM history
In the 1970s we had the virtual memory operating system wars. A number of companies, including DEC, Prime and IBM, developed virtual memory operating systems.

The key enabler of virtual memory operating systems was the advent of 32-bit processors with true 32-bit address spaces. This was back when people programmed on minicomputers with a quarter of a megabyte to, at most, 4 MB of physical memory.

The VAX/VMS (Virtual Address eXtension/Virtual Memory System) OS, in contrast, offered a massive 4GB virtual address space: 2GB reserved for the system and 2GB for user processes.

Virtual memory operating systems enabled important changes for the industry. Software developers could focus on making their software as functional as possible without worrying about the underlying memory architecture.

The virtual memory wars continued into the PC era, when MS-DOS was limited to 640KB of usable memory and Quarterdeck Systems came in with QEMM. But eventually Intel and Microsoft got their act together and solved the problem of virtual memory for the masses.

No one pays extra for a virtual memory operating system today. It is expected functionality that most don’t even know exists.

Just as 32-bit processors enabled virtual memory, the advent of microprocessors powerful enough to run multiple applications created the virtual machine opportunity. If Microsoft had not been intent on selling as many server licenses as possible, it could have improved Windows multitasking so that server sprawl might never have become a problem.

Be that as it may, there is nothing in today’s virtualization technology that could not and should not be incorporated into server operating systems. In 20 years new CompSci grads won’t know that virtual machines weren’t always built into the OS.

Feature vs product
Now just because something is at bottom a feature rather than a product doesn’t mean that you can’t make gobs of money before it becomes common. VMware is one example.

Data deduplication, for example, is clearly a feature. But the founders of Data Domain were able to make a lot of money by exploiting that feature with a high-quality, application-focused implementation and by being first to market.

Now deduplication is being built into new products. While debates over implementation details will continue among engineers, in a few years most users will see deduplication as a checkbox item.
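
For the curious, here is a minimal sketch of how content-hash block deduplication can work. The fixed 4KB block size, SHA-256 fingerprints and in-memory index are my illustrative assumptions, not a description of Data Domain's or anyone else's implementation.

    # Minimal sketch of content-hash block deduplication (Python).
    # Assumptions for illustration: fixed 4KB blocks, SHA-256 fingerprints,
    # and a plain dict standing in for the on-disk fingerprint index.
    import hashlib

    BLOCK_SIZE = 4096

    def dedup_write(data: bytes, index: dict, store: list) -> list:
        """Store only blocks whose fingerprint is new; return the recipe
        (list of fingerprints) needed to reconstruct the data later."""
        recipe = []
        for off in range(0, len(data), BLOCK_SIZE):
            block = data[off:off + BLOCK_SIZE]
            fp = hashlib.sha256(block).hexdigest()
            if fp not in index:          # new content: pay for the write
                index[fp] = len(store)
                store.append(block)
            recipe.append(fp)            # duplicate content: store only a reference
        return recipe

    # Writing the same 16KB payload twice stores just one unique block.
    index, store = {}, []
    payload = b"x" * (4 * BLOCK_SIZE)
    dedup_write(payload, index, store)
    dedup_write(payload, index, store)
    print(len(store))                    # -> 1

The point of the sketch is that every write costs a hash computation and an index lookup, which is why dedup rides the curve of cheaper CPU cycles and cheaper random I/Os.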

The transient and the permanent
How do we distinguish between a feature and a product? It is the difference between the transient and the permanent.

Transient problems can be resolved. Permanent problems can only be managed.

Processor virtual memory management is a problem that has been solved for the great majority of the world’s computers. Deduplication can be added to storage systems over time as computing power increases and the cost of bandwidth and random I/Os (thanks, NVM!) drops.

But some problems can only be managed, not solved. The issues of scale, aggregation and metadata – among others – will always be with us.

Like the gas-rich regions of galaxies where new stars form, these manageable-but-not-solvable issues will remain areas rich in startup formation.

The StorageMojo take
Applying this theory to current markets yields some predictions:

  • VMware, despite its feature-rich ecosystem and early lead, will lose to vendors such as Microsoft and Red Hat that can incorporate the most important virtualization features into their operating systems. VMware has no OS to fall back on and thus has no long-term future.
  • Data Domain is a wasting asset. As others add dedup to their products, DD’s differentiation will decline along with its market value.
  • Scale-out storage, like Isilon, will remain a lively market segment as the economics of disk, NVM, software, aggregation and metadata keep changing the calculus of efficient and durable storage.
  • The improving economics of erasure coding will enable more efficient approaches to backup and archiving – as long as Moore’s Law continues to hold. (A back-of-the-envelope sketch follows this list.)
  • System management is a permanent problem. When we get autonomic management at one level, the problem just kicks up a level with increasing scale.
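
To make the erasure coding bullet concrete, here is a back-of-the-envelope comparison of raw storage overhead for triple replication versus a k-data + m-parity erasure code. The 10+4 geometry is an illustrative assumption, not any particular vendor's layout.

    # Back-of-the-envelope storage overhead: replication vs. erasure coding (Python).
    def replication_overhead(copies: int) -> float:
        """Raw bytes stored per logical byte with simple replication."""
        return float(copies)

    def erasure_overhead(k: int, m: int) -> float:
        """Raw bytes stored per logical byte for a k-data + m-parity code."""
        return (k + m) / k

    print(replication_overhead(3))    # 3.0x raw storage, survives loss of 2 copies
    print(erasure_overhead(10, 4))    # 1.4x raw storage, survives loss of any 4 fragments

The catch is the encode and rebuild math, which is why the prediction hedges on Moore's Law continuing to deliver cheap cycles.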

Just as no one remembers the once-critical register and segment management skills required for 16-bit minicomputers, in a decade or so all the painfully acquired knowledge needed to manage VMs will lose value as it gets built into the OS infrastructure. But there will always be new worlds to conquer.

Courteous comments welcome, of course. How would you define a feature vs a product?