The performance increase in individual CPUs is slowing to a crawl. All the easy wins – higher clock speeds, wider datapaths, more DRAM, larger registers and caches, 2-4 cores – have been exploited.

Doctor, is there any hope?
In the recent PCAST report on Federal technology initiatives (see Fed funding for our digital future), one sidebar suggested “Progress in Algorithms Beats Moore’s Law.”

. . . in many areas, performance gains due to improvements in algorithms have vastly exceeded even the dramatic performance gains due to increased processor speed.

The algorithms that we use today for speech recognition, for natural language translation, for chess playing, for logistics planning, have evolved remarkably in the past decade. It’s difficult to quantify the improvement, though, because it is as much in the realm of quality as of execution time.

In the field of numerical algorithms, however, the improvement can be quantified. Here is just one example . . . a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988, using the computers and the linear programming algorithms of the day. Fifteen years later – in 2003 – this same model could be solved in roughly 1 minute, an improvement by a factor of roughly 43 million. Of this, a factor of roughly 1,000 was due to increased processor speed, whereas a factor of roughly 43,000 was due to improvements in algorithms!
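
The arithmetic checks out. Here is a trivial sanity check – the script is my own, but the constants come straight from the quote above – confirming that the two factors multiply out to the quoted total:

```python
# Sanity check on the quoted speedup figures.
# Assumption: "82 years" means wall-clock runtime and the 2003 solve takes 1 minute.

MINUTES_PER_YEAR = 365.25 * 24 * 60            # ~525,960 minutes

total = 82 * MINUTES_PER_YEAR / 1              # 1988 runtime / 2003 runtime, in minutes
hardware = 1_000                               # factor credited to faster processors
algorithms = 43_000                            # factor credited to better algorithms

print(f"82 years / 1 minute ~ {total:,.0f}x")              # ~43,128,720x
print(f"1,000 x 43,000      = {hardware * algorithms:,}x") # 43,000,000x
```

Both come out at roughly 43 million, so the quoted split between hardware and algorithms is internally consistent.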

The StorageMojo take
Let’s file this one under “Wishful thinking” along with “US housing prices will never decline.” Piecemeal enhancements of specific application areas cannot replace the generalized performance improvements we’ve seen for decades.

No doubt there are important algorithmic improvements to be made, and in certain problem spaces those speedups will far exceed Moore’s Law – even though the Law is about transistor count, not performance.

That doesn’t change the fact of computing today: the era of predictable, rapid performance improvement is over. Like miners working a vein of rich ore as it thins out, we will keep improving our computers, but the effort required to do so is rising fast.

Cheap(er) SSDs and larger memories and caches are masking the CPU plateau by boosting system-level performance, but reduced I/O latency and increased bandwidth will only take us so far, as the sketch below illustrates. The way forward is a game of wringing out single-digit percentage improvements, not the 2-3 year doublings of the last 60 years.
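
To put a number on “only so far”: if only part of a workload is I/O-bound, speeding up the I/O alone caps the overall gain. A minimal sketch using Amdahl’s Law – my illustration and my numbers, not anything from the PCAST report:

```python
# Amdahl's Law: overall speedup when only a fraction of the work gets faster.
# All numbers below are hypothetical, chosen only to illustrate the point.

def amdahl(fraction_improved: float, factor: float) -> float:
    """Overall speedup when `fraction_improved` of runtime becomes `factor`x faster."""
    return 1.0 / ((1.0 - fraction_improved) + fraction_improved / factor)

# Suppose 40% of runtime is I/O and an SSD makes that part 100x faster:
print(f"{amdahl(0.40, 100):.2f}x overall")   # ~1.66x, despite a 100x I/O speedup
```

Even a 100x faster storage tier yields well under 2x overall when most of the time is spent elsewhere – which is why faster I/O masks, but cannot replace, CPU performance growth.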

Courteous comments welcome, of course. The professor whose work the PCAST quote refers to is Martin Grötschel of Konrad-Zuse-Zentrum in Berlin. He’s been doing brilliant work on optimization problems – including the traveling salesman problem and data network design – for decades.