Commenters on the last post – Open source storage array – helped crystallize an idea that’s been lurking for years: comparing disk storage hardware on per-slot price. The Backblaze box, which costs about $50/slot, got a comment that said, in effect, “it doesn’t have the features of a $200/slot box.” Good!
But the comment raised an interesting point: since we all use the same disks from the same few – and soon to be fewer – manufacturers, isn’t the cost of the tin we wrap them in a key metric? Let’s call it PSC – Per Slot Cost.
- Focus on value-add. We know how many disk slots there are in a storage system. We know how much disks cost. Therefore, the per-slot price tells us what the vendor’s value-add per disk is – or what we’re supposed to think it is.
- Increases pricing contrast. Disk costs are typically 10-15% of the price of a mid-to-high-end array. The number of disk slots in those arrays varies, as do individual disk capacities. These variables obscure what the vendor is asking for their value-add.
- Cleaner comparisons. As a corollary to the previous point, PSC makes it easier to compare architecturally similar systems – SAS vs SAS, hybrid SSD/SATA systems, RAID 6 systems – whose hardware cost structures should be similar.
- Focus on software value. Since most storage systems – even high-end systems – run on commodity hardware, the biggest price variable is in software. Isn’t that where we should focus?
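As a back-of-the-envelope illustration of the metric (all prices here are hypothetical, not vendor quotes), the PSC arithmetic is simple:

```python
def per_slot_cost(system_price_ex_disks: float, slots: int) -> float:
    """PSC: what the enclosure, controllers, and software cost per disk slot,
    excluding the disks themselves (which everyone buys from the same vendors)."""
    return system_price_ex_disks / slots

# Hypothetical examples loosely echoing the figures above:
pod = per_slot_cost(45 * 50, 45)     # a Backblaze-style pod at ~$50/slot
array = per_slot_cost(48 * 200, 48)  # a pricier array at ~$200/slot
print(pod, array)                    # 50.0 200.0
```

The gap between the two numbers is exactly the vendor value-add the bullet points describe: the disks cancel out, and what remains is the price of the tin and the software.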
The cloud storage angle
PSC should be useful for market segmentation. Instead of sorting arrays into entry-level price buckets – such as $75k-$100k – or ranking them on $/GB, PSC should track with the value of the stored data.
Expect to see segments range from Bulk (the Backblaze segment) to Heavy Transactional (traditional big iron) with yet-to-be-named segments between. But the most important use for PSC is in highly-scalable architectures in the public vs private cloud storage arena.
Cloud architectures are distinguished by the fact that the larger they scale, the lower their PSC. This is partly a function of economic necessity – who can afford 2 dozen PB of Symm? – and largely due to their use of software-based object replication instead of RAID.
When your storage is cheap, you can afford triple replication. And when you have massive numbers of boxes – and at least 2 data centers – you can have strong disaster tolerance. So large-scale cloud suppliers have motive and opportunity to reduce PSC.
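The replication-vs-RAID tradeoff above comes down to simple capacity-overhead arithmetic. A rough sketch (the group sizes are illustrative assumptions, not any particular vendor's layout):

```python
def usable_fraction_replication(copies: int) -> float:
    """Usable share of raw capacity under n-way object replication."""
    return 1 / copies

def usable_fraction_raid6(disks_per_group: int) -> float:
    """Usable share of raw capacity under RAID 6 (two parity disks per group)."""
    return (disks_per_group - 2) / disks_per_group

# Triple replication burns 3x raw capacity; a 12-disk RAID 6 group burns ~1.2x.
# Cheap slots (low PSC) are what make the 3x overhead affordable.
print(usable_fraction_replication(3))  # ~0.333 usable
print(usable_fraction_raid6(12))       # ~0.833 usable
```

Replication costs far more raw capacity, but buys simpler software, faster rebuilds, and geographic dispersion across those 2+ data centers – which is the trade the big cloud players are making.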
The private cloud space is where the calculus gets interesting. Many observers dismiss the private cloud concept because private clouds can’t possibly compete with Amazon, Microsoft and Google on scale or cost, including PSC.
The StorageMojo take
There is a private cloud market because other factors – such as network latency, and the commercialization of high-scale software such as Hadoop – make it possible for any focused billion-dollar company to build a competitive cloud infrastructure. The hardware is already a commodity, and many of the improvements that Google first pushed, such as more efficient power supplies, are now widely available.
The bigger issue for competitive private clouds is an enterprise IT mindset that lacks the skills to specify and manage them. This is where PSC comes in: it gives CFOs a simple way to compare their costs to best-of-breed cloud providers.
PSC is just a metric, not the metric. The big guys are optimizing things – like power distribution – that won’t move the needle for smaller players.
But if you use commodity hardware then you should focus on the software. And since every big player is already running on commodity hardware – a Good Thing, BTW – let’s focus on getting software that delivers business value. To the extent that PSC helps decision-makers do that, it will help the industry shift the focus from things like $/GB to a higher-level discussion.
Courteous comments welcome, of course. I just paid $250 per slot for an array with 1 controller, 1 fan and 1 Thunderbolt connection to my 1 desktop. Yes, I could have done better – if I didn’t want Thunderbolt. So PSC doesn’t trump all.