Several years ago an Intel briefer promised me $50 10Gb Ethernet ports. The shocker: prices have dropped little in the last 8 years – well more than a decade in Internet time.

I don’t look back as often as I should. But a note from a ZDNet reader prompted some retrospection and research into network prices.

Consumer adoption is behind the most dramatic price declines in high tech. CPUs, disks, monitors, printers and GigE got cheap when hundreds of millions of people were buying them.

Network effects
Who has a 10Gig home network? Some StorageMojo readers, no doubt, but the rest of the world is sitting on its hands.

This is seen in some remarkably frozen specs. Apple introduced its first GigE system in 2000. 15 years later GigE is still standard on Mac systems – along with, now, 20Gb/s Thunderbolt.

Obviously, Apple’s – and everyone else’s – networking investments have been going into Wi-Fi, not Ethernet. Consumers are willing to pay for the convenience of faster Wi-Fi; not so much for faster Ethernet.

That shows in the pricing. The lowest-priced 10Gig PCIe adapter online is about $100. Same for the lowest-cost InfiniBand adapter.

But here’s a point off the curve: the lowest-cost 20Gb Thunderbolt adapter is about $72, though it does require motherboard support.
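To put those prices on a common scale, here’s a back-of-the-envelope cost-per-Gb/s comparison using the rough street prices quoted above. It’s a sketch, not a market survey, and it assumes the InfiniBand card is a 10Gb/s-class part, which isn’t stated above:

```python
# Back-of-the-envelope cost per Gb/s for the adapters above.
# Prices are the rough street prices quoted in the post; the
# InfiniBand card is assumed to be a 10Gb/s-class part.
adapters = {
    "10GigE PCIe adapter":      (100, 10),  # (price USD, line rate Gb/s)
    "InfiniBand adapter":       (100, 10),
    "20Gb Thunderbolt adapter": (72, 20),
}

for name, (price, gbps) in adapters.items():
    print(f"{name}: ${price / gbps:.2f} per Gb/s")
```

On that basis the Thunderbolt port delivers bandwidth at roughly a third of the Ethernet price per Gb/s, which is what makes it a point off the curve.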

But the StorageMojo question is: what impact has the price/performance differential had on datacenter architectures?

Thumb rules
The StorageMojo rule of thumb is that storage and networks are about 80% fungible. If we had infinite-bandwidth, zero-latency networks, 80% of today’s storage would be in the cloud; if we had no networks, 80% of today’s storage would be local.

Another rule of thumb: 80% of the cost of an array is bandwidth, not capacity.

Direct-attach
The net effect favors direct-attach storage. PCIe, introduced in 2003, started out at 250MB/s per lane and is inching its way to 2GB/s: an 8x improvement in ≈10 years.
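For reference, here’s the per-lane arithmetic behind that 8x figure, plus the aggregate an x16 slot offers. A sketch: the generation-to-year mapping is approximate, and PCIe 4.0 is still in the works:

```python
# Approximate usable PCIe bandwidth per lane, per direction, after
# encoding overhead (8b/10b for Gen 1/2, 128b/130b for Gen 3/4).
pcie_gbs_per_lane = {
    "PCIe 1.x (2003)":    0.25,
    "PCIe 2.0 (2007)":    0.50,
    "PCIe 3.0 (2010)":    0.985,
    "PCIe 4.0 (planned)": 1.97,
}

for gen, per_lane in pcie_gbs_per_lane.items():
    print(f"{gen}: {per_lane:.2f} GB/s per lane, "
          f"~{per_lane * 16:.1f} GB/s across an x16 slot")

# 1.97 / 0.25 ≈ 8x per lane since PCIe's 2003 debut.
```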

Fibre Channel, over the same interval, has managed a 4x increase in bandwidth to 3.2GB/s, with higher rates projected in the next couple of years. Assuming, of course, that sales support the engineering costs.

10Gig Ethernet has been spec’d for more than a decade, but 40 and 100GigE uptake has been slow due to the cost of processing high data rates. 40GigE is seeing more uptake, but is hardly common. Essentially, Ethernet bandwidth has been static for much of the decade.
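That processing cost is easier to appreciate as packet rates. A rough sketch, assuming standard 1518-byte and minimum 64-byte frames plus 20 bytes of preamble and inter-frame gap per frame on the wire:

```python
# Frames per second a host or NIC must handle at a given line rate.
def frames_per_second(line_rate_gbps, frame_bytes):
    wire_bytes = frame_bytes + 20  # preamble + inter-frame gap
    return line_rate_gbps * 1e9 / 8 / wire_bytes

for rate in (10, 40, 100):
    full = frames_per_second(rate, 1518)  # full-size frames
    tiny = frames_per_second(rate, 64)    # minimum-size frames
    print(f"{rate:>3}GigE: {full / 1e6:5.2f}M full-size frames/s, "
          f"{tiny / 1e6:6.1f}M minimum-size frames/s")
```

At 100GigE the worst case approaches 150 million frames per second, which is where the cost of processing high data rates comes from.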

Network upgrades have always been lumpy. The order-of-magnitude uplifts are more difficult than a CPU speed bump. But we’re seeing something more than uptake indigestion.

The StorageMojo take
Backplane bandwidths have increased much faster than network interconnect speeds. Coupled with PCIe flash storage and the low cost of that bandwidth, this explains much of the rapid growth in direct-attached architectures and shared-nothing clusters – and the growing problems in the SAN market.

The larger problem seems to be that semiconductors aren’t getting faster. We needed GaAs to make gigabit FC work 18 years ago, but now it seems even that isn’t fast enough for 100GigE.

The performance crunch is affecting CPUs most visibly. But if CPUs can’t go faster and crunch more data, maybe we don’t need faster networks either.

In any case, DAS seems to have long-term legs. The advantages SANs had 16 years ago are evaporating as DAS improves and networks stagnate.

Courteous comments welcome, of course.