I thought I learned a lot on my college debate team, but nothing prepared me for the rhetorical devices of high tech marketing. Especially the ones employed by technologists.
Next bench marketing, meet next think tank marketing
The classic model of successful technology marketing is the engineer who builds a cool device for the engineer at “the next bench”. The inventor doesn’t need to do market research or think about his audience because he is the audience.
As technologists step away from what they know . . .
A really smart, articulate and experienced technologist is one of the most dangerous marketing tools. Deep expertise, a winning personality and an uncanny focus on favorable technical issues creates an aura of technological inevitability that seduces even the most hardened customers – at least until the next presentation. Ideally the CTO/architect is well-versed in customer problems, but it is really hard to be as smart about customers as one is about technology.
I differentiate architectures from “marketectures” because the latter are usually post hoc creations designed to justify random product “solution” agglomerations. A true architecture is designed in advance of implementation.
Architecture vs. Implementation
Superiority through architecture arguments are often both attractive and misleading. If an engineer can do for a dime what any fool can do for a dollar, an architect can make a dime’s worth of engineering sound like a dollar.
There are at least three problems with architecture-based arguments:
- Architecture discussions always entail unstated assumptions and hypothetical use cases that may not reflect the real world. Application requirements tend to morph due to feedback from users. Architectures aren’t so flexible.
- The architecture may deliver exactly what it promises to a market that doesn’t care.
- Architecture is only a small part of the product a customer buys. Implementation quality, support services, integration ease and compatibility may easily trump a superior architecture.
The high-tech product conundrum: fast, cheap or good – pick any two. Fibre Channel storage networks implicitly assumed that they would be similar to Ethernet networks, only faster and more reliable. Yet the low-overhead protocols that cut latency also limited flexibility and functionality, much of which has since been added by moving storage to TCP/IP. And as data cools, access time becomes less important than access cost, where Ethernet wins.
Ready, fire, aim
USB 2.0 is spec’d at 480Mb/s, versus Firewire’s 400Mb/s. Yet the latter is faster for storage use and the former vastly more popular, probably due to restrictive Firewire licensing. In either case the mass market wanted “fast” and both do that acceptably, so Firewire’s superior architecture and performance are a non-issue.
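The nominal rates are easy to compare on paper; what the spec numbers hide is protocol efficiency. Here is a minimal sketch of that comparison – only the 480 and 400 Mb/s nominal rates come from the specs; the efficiency figures below are illustrative assumptions, not measurements:

```python
# Nominal signaling rates from the specs:
# USB 2.0 "Hi-Speed": 480 Mb/s; FireWire (IEEE 1394a): 400 Mb/s.

def mbps_to_mbytes(mbps: float) -> float:
    """Convert megabits/s to megabytes/s (8 bits per byte)."""
    return mbps / 8

usb2_nominal = mbps_to_mbytes(480)      # 60.0 MB/s on paper
firewire_nominal = mbps_to_mbytes(400)  # 50.0 MB/s on paper

def effective(nominal_mbytes: float, efficiency: float) -> float:
    """Effective throughput = nominal rate x protocol efficiency.

    USB 2.0 is host-polled, so its bus efficiency for bulk storage
    transfers tends to be lower than FireWire's peer-to-peer DMA model.
    Callers supply the efficiency factor; the values used below are
    hypothetical, chosen only to show how a lower nominal rate can
    still win on sustained throughput.
    """
    return nominal_mbytes * efficiency

usb2_sustained = effective(usb2_nominal, 0.55)      # hypothetical efficiency
firewire_sustained = effective(firewire_nominal, 0.80)  # hypothetical efficiency
```

With those assumed efficiencies, the "slower" 400 Mb/s bus sustains more storage throughput than the "faster" 480 Mb/s one – which is the point: the spec-sheet number is an architecture claim, not a product measurement.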
Architecture quality doesn’t equal product quality
Back in the 1980s Motorola’s 68000 32-bit microprocessor architecture was widely acknowledged as superior to Intel’s 8080-derived x86 designs with their clumsy segmented addressing and complex instruction set. But Intel beat the pants off Motorola because its developer tools and support were better and Intel aggressively tried to fill every processor market niche with a suitable product. Motorola had the superior architecture, but Intel made sure it had everything else that made processors desirable.
The StorageMojo take
Take architecture arguments with several grains of salt. Just because something is a stunning technological achievement or blindingly smart doesn’t mean it will do what you want at a price you can afford.
In my view architecture is less important than implementation, yet implementation is much more difficult to evaluate. And implementation is less important than the entire product package. Enjoy your next architecture pitch, and just remember that somewhere someone equally smart is figuring out another angle and that, in the end, it is the application that delivers the goods.
Comments always welcome. Moderation turned on to keep spam from overwhelming the site.
Another great post.
Motorola had some problems getting the 68K into production … but then there was the NatSemi 32000 and MIPS. Did you know that MS NT was running on MIPS before it was available on Intel?
Longer-term AMD may even the score, running the same application software with RISC in the ‘basement’ emulating Intel.
Intel’s ‘implementation’ can easily be evaluated and compared against AMD’s much ‘better architecture’.
I’d forgotten about NatSemi’s 32000 family, but your reference reminded me why I’d ever even known about it, since DEC wasn’t using it, and evidently no one else was either. NatSemi used a marketing technique I call the “long bomb” in a last-ditch effort to save the family.
Every so often you’ll see a vendor come out with a really long product roadmap. Not two years or three, but four to eight years. NatSemi did that with the 32000. A really elaborate one with four or five generations, cost-reduced models, stretch models, man, it looked fabulous. Then they pulled the plug six months later. Since then I’ve seen that tactic at least a half-dozen times and it signals failure every time.
Kind of like a politician announcing they are behind someone “100%”. Usually means two weeks or less.
Actually the recognized pioneer of the commercial SMP (the Sequent Balance 8000) was based on the NS 32032 … so “someone” was using it 🙂