It’s been a rule of thumb for the last 30+ years that any functionality implemented in hardware will surely migrate to software. But that is starting to change.
At the beginning of a new application – say RAID controllers – the volumes are low and the trade-offs poorly understood. Perfect for FPGAs, which are relatively costly per unit, but flexible and easily updated.
Once the application is better understood the cycle-intensive bits can be optimized and hardware accelerated in ASICs, which are cheaper than FPGAs in higher volumes.
But the move to all software comes when CPUs are fast enough to run the workload without extra hardware acceleration. Most RAID controllers have been all software, running on standard x86 CPUs, for the last 8 years or so.
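To see why, consider the cycle-intensive bit a RAID-5 controller spends its time on: XOR parity. Here is a minimal illustrative sketch in Python – not any vendor's implementation – of how parity is computed and a lost block rebuilt. Optimized versions of this loop run fast enough on a modern x86 CPU that dedicated silicon stopped paying for itself.

```python
# Illustrative RAID-5 style parity: parity is just XOR across the data
# blocks in a stripe, so losing any one block is recoverable by XOR-ing
# the survivors. Sketch only, not a product implementation.

def xor_blocks(blocks):
    """XOR a list of equal-sized byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

# A stripe of three data blocks plus one parity block.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Simulate losing the second block and rebuilding it from the rest.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```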
There’s a new sheriff in town
Sheriff Moore – and he’s telling you to slow down.
Moore’s Law – the doubling of the number of transistors on a chip every 18-24 months – is still hanging on. But the performance increase that was assumed to accompany it is not. Haswell processors are barely faster than Ivy Bridge, and the next-gen Broadwell’s performance improvements will be marginal as well – except for graphics.
Transistors no longer get faster as they get smaller. So extra transistors are used to speed up common functions. Codec acceleration. Fancier graphics.
Great stuff! But it means that today’s FPGA-based hardware is much more likely to remain hardware. And the comforting assumption that we can put all the cool stuff in software real soon is no longer operative.
The StorageMojo take
Across the entire infrastructure world the pace of improvement is slowing. CPUs aren’t getting much faster. Storage densities aren’t improving as they were 10 years ago. Network speeds aren’t climbing – and, more importantly, costs aren’t dropping – the way they were.
Several things have obscured the trend and blunted its impact: multiprocessing; NAND flash SSDs; 10GigE uptake; massively parallel GPUs; scale-out architectures. But those too are less and less effective.
The current mania for Software Defined Everything relies on the availability of unused hardware infrastructure resources. That was VMware’s original value prop. But over the next decade – unless there are some fundamental breakthroughs not now visible – that will change.
It’s not the end of Software Defined Everything, but you can see it from here.
Courteous comments welcome, of course. If you see it differently, why?
That also means there is going to be a movement towards efficient code, eliminating layers of bloat and sloppy coding. Most likely this will require collapsing layers together, with the database layer, at least, migrating into the storage array.
I wouldn’t be surprised if within 2 years we start seeing arrays that are basically Hadoop clusters in a box, with disposable hardware cartridges holding RAM, ONFi-connected flash and an arm64 CPU in the same form factor as a 2.5″ SSD. That’s pretty much what SATA SSDs are anyway. The backplane they would slot into would be a 10G switch fabric without the cost and power requirements of an Ethernet PHY. SATA Express is essentially PCIe. I don’t know if a PCIe switch rather than a bus is possible, but that would be an alternative interconnect.
Fazal, isn’t that a lot like HP Moonshot?
It will take a while before the end is really in sight.
There is still enough room to optimize the software and cut out software and hardware layers (think Moonshot and the like).
FPGAs are still software 🙂 (http://www.myhdl.org/)
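For readers who haven’t met MyHDL: it lets you describe FPGA logic as ordinary Python, which is the point of the comment above – the “hardware” is written, tested and versioned like software, then converted to Verilog/VHDL for synthesis. A minimal sketch of a 2-to-1 multiplexer in classic MyHDL style (decorator details vary between MyHDL versions; this is illustrative rather than production code):

```python
# A 2-to-1 multiplexer described in ordinary Python with MyHDL.
# The same description can be simulated in Python or converted to
# Verilog/VHDL for FPGA synthesis. Sketch only; newer MyHDL versions
# also use a @block decorator.
from myhdl import Signal, intbv, always_comb

def mux(z, a, b, sel):
    """2-to-1 multiplexer: drive z from a or b depending on sel."""
    @always_comb
    def logic():
        if sel == 1:
            z.next = a
        else:
            z.next = b
    return logic

# "Instantiating" the hardware is just a function call on Signal objects.
a, b, z = [Signal(intbv(0)[8:]) for _ in range(3)]
sel = Signal(bool(0))
mux_inst = mux(z, a, b, sel)
```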
I’m not betting any money on dedicated storage CPUs for the foreseeable future. RAM will get bigger, and compressed RAM has the benefit of relaxing the memory-path bottleneck as well.
Disk will only be there for recovery, and that needs way less capacity than the current way of using storage.
When Amazon and Google switch to dedicated storage processors in their clouds, it’s time to start thinking about it again, but that will take a long time imho.