Why we’re getting vertical – again

by Robin Harris on Monday, 18 May, 2009

Until the 1980s the computer industry was characterized by a vertical integration of the major players. They produced their own CPUs, operating systems, applications, networks, peripherals, interconnects, and in some cases clusters.

With the advent of the PC and Ethernet the industry had, for the first time, a high-volume computer and network. The IBM PC’s use of a Microsoft operating system and an Intel processor in an inadvertently open architecture set in motion a new set of economic forces that in less than 10 years drove several billion-dollar-plus minicomputer companies out of business.

Likewise, Ethernet volume drove the development of very low-cost networking components. With the broad acceptance of the TCP/IP protocol stack the die was cast: existing network architectures, including IBM’s Systems Network Architecture and DECnet, and newer ones built on token ring or token bus, were crushed.

Increasing volumes drove costs down the learning curve. Intel, after exiting the DRAM business, used its gusher of money to drive process technology faster than any of its competitors could. Board and system vendors, able to concentrate on a single CPU architecture, drove their costs down, creating an economic implosion that wiped out most competing chip architectures in less than a decade.

Likewise, Microsoft’s DOS and Windows operating systems became effective standards for the high-volume computer business. Application vendors either migrated to Microsoft or died along with their minicomputer hosts.

The horizontal industry
In a decade the structure of the industry was radically changed: large vertically integrated computer companies such as DEC, Wang, Prime, Data General, CDC, and most of the Seven Dwarfs ceased to exist. In their place arose a new group of horizontally integrated companies such as Intel, Microsoft, Cisco, Oracle, SAP and, in services, IBM Global Services.

In this new world the battles took place within these horizontal layers: Intel versus AMD and SPARC; Microsoft versus Linux; Oracle versus Informix and MySQL.

The vertical reintegration
But after two decades in which horizontal integration was obviously the winning strategy, we’ve seen a U-turn toward vertical business models again.

  • Cisco moving into servers – and with that big warchest, how about storage too?
  • EMC buying VMware to offer virtual servers. And selling real servers in Atmos.
  • Oracle buying Sun and saying they’ll offer fully integrated HW/SW systems.
  • Apple stocking up on chip guys.
  • Other buys, like HP buying LeftHand and EDS, Dell & EqualLogic, IBM & XIV, that point to more integrated offers.

What’s going on?
Ask yourself what drove companies horizontal.

  • Economy of scale. $10B companies could maintain credible R&D on all the pieces, but smaller companies couldn’t – creating a market for vendors who focused on one layer.
  • Standards. Whether de jure or de facto, standards such as TCP/IP, MS-DOS, NetWare, SCSI and IDE opened people’s eyes to multi-vendor infrastructures and freedom from lock-in.
  • Distribution costs. Dedicated account teams can keep CIOs happy, but down market the margin dollars aren’t there for that kind of handholding. Enter the VAR channel and the distributors who support them.
  • Increased capital intensity. With multi-billion-dollar chip plants and coming investments in patterned media, HAMR, and 10 and 40 GigE, small companies just couldn’t afford to stay in the game.
  • Margin cherry-picking. Disk drive vendors did the work, but the array products got the margins. Likewise, Intel got great margins while server vendors didn’t – and the same with Microsoft and Cisco.

What’s changing?
The dynamics are fascinating. This is only a partial list.

  • Wall Street. If you want a higher stock price you need to show Wall St. that you can and will grow. When, like Cisco, you dominate your segment, what else can you do?
  • Vertical is cheap. Companies are cheap right now. Lots of open source software. Fabless semiconductor design, commodity infrastructure, scale-out storage and compute: it just isn’t that expensive to move up the stack.
  • Best defense. Cisco has served notice on HP’s and IBM’s server businesses. EMC is plucking the high-margin software from commodity servers. Oracle could be packaging up dedicated app/database/server/storage racks and containers – or selling you a service that does much the same.
  • Solutions, not products. Sun made great building blocks – but customers don’t have the people to put them together, and VARs aren’t getting the margins to do it for them. “I want an X that will do Y” is the customer demand. Package it up and win the sale.
  • Shrinking margins. There is no shelter from this storm. Cisco knew its free ride was ending as IBM and HP looked for more revenue – see “best defense” above.

The StorageMojo take
Blood on the streets. More M&A. Shifting battle lines.

“Co-opetition” is shifting to plain old bare-knuckle competition.

Vendors can say goodbye to 60% gross margins. Point product diversity will increase until the product landscape stabilizes. That will take about 5 years.

Courteous comments welcome, of course.


Jerry Leichter May 19, 2009 at 6:04 pm

There’s another side to this great cycle: Many decisions that seemed fixed for a decade or more are open to debate again.

For quite some time, it was clear that *the* OS was Windows (in most uses) and some Unix variant in a few others. Now, Windows is under attack in its areas of greatest strength from Mac OS. Virtual machines running on bare metal – with the OS just some kind of funny application layer – are a whole new alternative. Even Microsoft’s research organization is going public with an incompatible post-Windows OS.

It was “settled” that strongly typed languages were the way of the future, and that Java was the answer to every programming question – except in certain specialized areas where C/C++ were the answer, and of course for the Microsoft specialists there was always C# (larger in actual use than its mind share). Now, for many application areas, Ruby and Python and other dynamically typed languages have taken over. Even functional languages are making a comeback as actual, practical programming tools.

Depending on whether you were lower-end Unix, Windows, or very high end, you accessed remote files with NFS, CIFS, or an FC SAN. That’s all becoming mixed up now, with iSCSI as the new guy on the block, competing with all three.

The x86 architecture so far continues to reign supreme, but ARM is a viable alternative – and the two are overlapping, which never happened before. For the first time in years, many applications need to worry about portability to multiple architectures. (There have been rumors about an ARM port of Windows. They don’t make a whole lot of sense, but it’s hard to imagine anyone even bothering to repeat such a rumor a couple of years back.) Beyond that, the emergence of GPUs as general-purpose programming resources also destroys the “everything is an x86” mindset.

The outlier here is networking, where we are still seeing convergence of distinct technologies. If you think about it, why do we have USB, Ethernet, WiFi, Bluetooth, FC, FireWire? Yes, they all differ in some important characteristics – but those increasingly overlap. The newest Bluetooth will shift over to WiFi-style connections for large transfers. FireWire is likely fading. Ethernet will kill FC. So at these levels, networking is in the convergence phase, not a new divergence. On the other hand, TCP still has no challengers. My prediction: it *will* see competition. It’s impossible to say exactly where and how – but it would be very odd if TCP were a unique, eternal fixed point with everything below and above it changing.

Dmitry Afanasiev May 20, 2009 at 4:05 am

TCP (actually, the whole TCP/IP stack) has its share of problems, and some future competition is probably already here. The next 5-10 years are going to be very interesting:
– location/ID split. How about endpoint-based mobility and multihoming? Process migration across L2 domains, and increased availability through connections to multiple providers without the pain of BGP, may become easy – should be very relevant to everything cloud. Just don’t forget about another layer of indirection and ID lookup.
– IPv4 addresses are running out, but there is no decent solution for interoperability between v4-only and v6-only hosts so far.
– DNS is still the name resolution protocol for the Internet, but DHT is finding more and more applications.
– The IETF is working on a congestion control protocol for background transfers, and BitTorrent Inc is actively involved.
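
The DHT trend Dmitry mentions is easy to make concrete. Most DHTs build on consistent hashing: keys and nodes hash onto the same circular space, and a key is owned by the first node clockwise from its hash, so lookups need no central directory. A toy sketch (the node names are hypothetical):

```python
import hashlib
from bisect import bisect

def ring_hash(s):
    # Map any string onto a fixed circular hash space.
    return int(hashlib.sha1(s.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes):
        # Each node sits at its hash position on the ring.
        self.ring = sorted((ring_hash(n), n) for n in nodes)

    def lookup(self, key):
        # The key's owner is the first node clockwise from the
        # key's hash, wrapping around at the top of the ring.
        points = [h for h, _ in self.ring]
        i = bisect(points, ring_hash(key)) % len(self.ring)
        return self.ring[i][1]

ring = HashRing(["node-a", "node-b", "node-c"])
owner = ring.lookup("example.com")  # same answer from any participant
```

Real DHTs such as Chord or Kademlia add routing tables so each node only tracks O(log n) peers, but the ring-ownership idea is the same.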

Walter May 20, 2009 at 11:20 am


This is analogous to predator-prey systems, and such complex systems are unstable and oscillate.
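
Walter's analogy maps onto the classic Lotka-Volterra predator-prey model. A minimal forward-Euler simulation (the parameter values are purely illustrative) shows the populations cycling rather than settling to an equilibrium:

```python
def lotka_volterra(prey=10.0, pred=5.0, a=1.0, b=0.1, c=1.5, d=0.075,
                   dt=0.001, steps=20000):
    # a: prey growth rate, b: predation rate,
    # c: predator death rate, d: predator growth per prey eaten.
    history = []
    for _ in range(steps):
        dprey = (a * prey - b * prey * pred) * dt
        dpred = (d * prey * pred - c * pred) * dt
        prey, pred = prey + dprey, pred + dpred
        history.append((prey, pred))
    return history

prey_series = [p for p, _ in lotka_volterra()]
# The prey population repeatedly overshoots and crashes rather than
# converging, crossing its equilibrium level again and again.
```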


Rex May 20, 2009 at 12:27 pm

What’s your take on how the 3 GHz/core wall, and multi-core CPUs might affect vertical integration? Old apps won’t get automatically faster by moving to new hardware, so there might be some incentive to rework old apps to make them more efficient, or rewrite old apps to use multi-core CPUs. New apps that need lots more speed through multi-core CPUs will be very expensive to develop — especially since this is an open research problem. Some apps simply can’t take advantage of multi-core CPUs and might require radical re-thinking.
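
For the embarrassingly parallel cases at least, the rework Rex describes has a well-known shape: split a CPU-bound loop into chunks and farm them out to worker processes. A minimal sketch (the chunking scheme and worker count are illustrative, not a general recipe):

```python
from concurrent.futures import ProcessPoolExecutor

def sum_squares(bounds):
    # CPU-bound work over a half-open range [lo, hi).
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_squares(n, workers=4):
    # Split [0, n) into one chunk per worker; clamp each chunk to n
    # and let the last chunk absorb any integer-division remainder.
    step = max(1, n // workers)
    chunks = [(i * step, min((i + 1) * step, n)) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum_squares, chunks))

if __name__ == "__main__":
    n = 100_000
    assert parallel_sum_squares(n) == sum(i * i for i in range(n))
```

The hard part Rex raises is exactly what this sketch dodges: most old apps aren't a single independent loop, and carving out the parallel portion is where the expensive rework lives.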

Similar issues come up with disk and RAM I/O bottlenecks.

— Rex

Pete Steege May 21, 2009 at 10:58 am

Great post Robin. I think the chum is in the high end waters, but not so much in mainstream IT. The proprietary approach works well for clouds, mega data centers and single sourcers. Not so well for your average IT shop that’s got a bunch of stuff they have to keep working.
