Kaminario has introduced the world’s fastest SAN storage, the K2. If time is money, this is for you.
DRAM
Kaminario’s K2 is fast because DRAM, not disk, is the primary storage. DRAM’s low latency, high bandwidth and endurance – no flash-style wear-out – break the tight link between capacity and performance that disks and flash impose. No need to buy excess capacity just to get enough IOPS, bandwidth or service life.
The product
Kaminario is a software company. However, they configure customer systems and install the software to order. No home-baked integration here.
The basic hardware unit is a Dell blade server. The blade servers are either I/O directors or data nodes. The Dell server chassis is a passive box – no active components on the backplane – but some customers opt for dual chassis for redundancy out of caution.
I/O directors
The I/O directors use 8 Gbit/s Fibre Channel to servers and 10 Gigabit Ethernet to data nodes. The company says it can saturate both links thanks to proprietary software optimizations.
Using FC switches, each I/O director can talk to multiple servers. Each I/O director can handle 150,000 random IOPS.
Data nodes
Each data node supports up to 288 GB of ECC DRAM. All data nodes have battery backup and 2 disks for de-staging data to persistent storage. Background de-staging during idle time shrinks the amount of data that must be saved when power fails.
The minimum config is 2 I/O directors and 4 data nodes with 500 GB of capacity. That’s 300,000 IOPS. They’ve been tested to 10 nodes and 1.5 million random read/write IOPS with support for 16 nodes – and double the IOPS – reportedly coming soon.
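The scaling claim above is easy to sanity-check. A minimal sketch, assuming the 150,000-IOPS figure is per I/O director and that scaling stays linear as directors are added (Kaminario hasn’t published a scaling curve):

```python
# Hypothetical linear-scaling model for K2 aggregate performance.
# The per-director figure comes from the article; linearity is an assumption.
IOPS_PER_IO_DIRECTOR = 150_000  # random IOPS per I/O director

def aggregate_iops(num_io_directors: int) -> int:
    """Aggregate random IOPS, assuming scaling stays linear."""
    return num_io_directors * IOPS_PER_IO_DIRECTOR

print(aggregate_iops(2))   # minimum config: 2 directors
print(aggregate_iops(10))  # tested config: 10 directors
```

With 2 directors this gives the 300,000 IOPS minimum-config figure, and 10 directors gives the 1.5 million IOPS tested figure, consistent with the numbers above.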
Under the covers
The I/O directors are clustered, so when one fails the others pick up the load. The switched 10 Gigabit Ethernet back end lets every I/O director reach every data node.
The replication default is 2 copies of all data on different blades. Plus copies on disk.
All this runs on standard Dell blade servers. No specialized, low-volume RAID controllers or power-hungry disk shelves.
Software
The secret sauce is the software. Kaminario doesn’t say much about how they do what they do. In any high-performance cluster maintaining metadata coherence across nodes is one of the tough problems.
They did say they maintain hash tables that enable very short updates to all I/O directors after writes. I suspect they have also implemented a low-latency back-end update protocol. Metadata serving is distributed across the cluster.
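Kaminario doesn’t publish its metadata protocol, so this is a hedged sketch of one common way hash tables distribute metadata ownership: hash each block address to a single owning node, so any I/O director can locate a block’s metadata without a central server or a cluster-wide broadcast. The node names are hypothetical.

```python
# Sketch of hash-based metadata distribution -- one plausible approach,
# NOT Kaminario's documented design. Node names are made up.
import hashlib

NODES = ["data-node-1", "data-node-2", "data-node-3", "data-node-4"]

def metadata_owner(lun: int, block: int) -> str:
    """Deterministically map a (LUN, block) pair to the node owning its metadata."""
    key = f"{lun}:{block}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return NODES[digest % len(NODES)]

# Every I/O director computes the same owner independently, so a write needs
# only a short update message to that one owner.
```

Because the mapping is deterministic, the post-write updates to all I/O directors can be as small as the article suggests: directors only need to know the mapping function, not a full copy of the metadata.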
They must also have some creative ways to max out FC links. I’d like to know more.
Management
With storage this fast they say you need little tuning. Lay LUNs across the data nodes and fasten your seatbelt. The software includes optimizations, like pseudo-random block layout to minimize contention, automatic load balancing and demand-based block replication.
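Combining the pseudo-random layout mentioned here with the 2-copy replication default described earlier, a minimal placement sketch might look like the following. The node count, chunk addressing and ring-style replica choice are my assumptions, not Kaminario’s documented algorithm:

```python
# Hedged sketch of pseudo-random chunk placement with 2 replicas on
# distinct data nodes. Node count and placement rule are assumptions.
import hashlib

NUM_DATA_NODES = 4  # hypothetical cluster size

def place_chunk(lun: int, chunk: int, copies: int = 2) -> list:
    """Return the data nodes holding each replica of a chunk.

    Pseudo-random first placement spreads load across nodes;
    replicas land on consecutive distinct nodes, ring style.
    """
    key = f"{lun}/{chunk}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    first = digest % NUM_DATA_NODES
    return [(first + i) % NUM_DATA_NODES for i in range(copies)]
```

The point of the pseudo-randomness is that neighboring chunks of one LUN land on different nodes, so a hot LUN doesn’t hammer a single blade.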
If your app calls for it you can tune chunk sizes and set replication policies. Kaminario says K2 is much easier to manage than typical high-performance storage – you don’t have to worry about disk-induced issues like stride.
Management is kept out of the data path on a dedicated GigE network.
Support
Kaminario says they have designed the product and their organization to provide mission-critical enterprise support. The visible elements – from configuration control and software installation to phone-home and remote diagnostics – back that up.
Who needs this?
If you are hammering a few TB of data for stock trading, real-time business intelligence or TLA government work, this could be the ticket.
Pricing
If you have to ask…
Kaminario has a unique approach: pay for performance:
…we price the solution based on the customer IOPS and capacity needs, so basically the way we present such a platform price is by $/GB/IOPS.
I *think* small configs start around $200k. For the performance market, price is something like #7 on the buying-criteria list. The first 3 are performance/availability – 2 sides of the same coin, really.
This removes the SPEC shadow puppetry between application requirements and storage performance. Of course, you have to know what performance you need. But anyone who performance-tunes high-end arrays will know that.
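Taking the “$/GB/IOPS” quote literally, price would scale with both the capacity and the performance a customer commits to. The arithmetic below is purely illustrative: the rate is back-figured from my $200k small-config guess above, not from any Kaminario price list.

```python
# Illustrative-only pricing arithmetic under a literal $/GB/IOPS model.
# The rate is derived from a guess, not a published price.
def k2_price(capacity_gb: float, iops: float, rate: float) -> float:
    """Price under a literal dollars-per-GB-per-IOPS model."""
    return capacity_gb * iops * rate

# Implied rate if 500 GB at 300,000 IOPS really costs ~$200k:
implied_rate = 200_000 / (500 * 300_000)  # ~$0.0013 per GB-IOPS
```

If the guess is anywhere near right, doubling either the capacity or the committed IOPS doubles the price – which is exactly the alignment of price with requirements that the quote describes.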
The StorageMojo take
Kaminario is opening a new niche at the performance end of the market.
The current Big Storage vendors claim that they too can do a million IOPS. And they can – for millions of dollars, a price that makes a few TB of DRAM look cheap.
Since high-end disk – ≈$1/GB retail – makes up 5-10% of the cost of a high-end array, replacing disk with DRAM might be expected to double the cost of an array. But K2 does away with all the low-volume kit – controllers, shared cache, disk packaging and more – and replaces it with high-volume blade hardware. That lowers costs a lot.
Kaminario has opened a new niche: hyper-performance data storage. While a few TB doesn’t sound like much, it is more text than all but the world’s largest libraries place on miles of shelves.
The data arms race has kicked up another few notches. It is more competition for the big iron arrays where they least expected it: at the high-end of the market.
Courteous comments welcome, of course.
I imagine running 288GB of RAM on a blade is going to be very $$ – blades don’t have many memory slots. Assuming they are using the M910 blade, 32 x 8GB DIMMs give 256GB of memory; online pricing puts that at about $39,000 (crucial.com).
I wonder what their software could do if they integrated it with HP blades and the Fusion-io accelerator card. The 160GB SLC version seems to have an online price of $7,700, and you shouldn’t need to worry about de-staging since it’s flash. Performance claims of 100k IOPS per card, and you can have up to 3 expansion cards per blade. Drawing 7.5W per card vs. a couple hundred watts for the memory, good savings in power as well.
Hi Robin,
I’m scratching my head to figure out how this solution is better than something like an Oracle Sun F5100 combined with a Comstar server (e.g., Sun Fire X4170), which would give you roughly 2 TB and 1 million IOPS in 2U. Does the K2 have a service-time value proposition that puts it above flash?
Sorry, they are entering a niche we have already filled for several years. We have been doing disk- and then flash-backed DRAM DDR-based SSD systems for years. We offer systems from 32 GB up to 512 GB of DDR, backed by either hard disk or flash with battery backup, and have for the last several years. Overall, we have been doing SSD for 30+ years…
The single 3U RamSan440 does 512 GB of DDR storage with 600,000 IOPS at 16 microseconds of latency and is backed up with flash. It was released in 2008. Before that we had the 128GB DDR, 400,000 IOPS RamSan400 backed with HDD in 2005.
Hi Kevin, Hi Mike,
There is a significant difference.
The new thing about Kaminario K2 is the unique combination of a true enterprise product, ease of use and hyper-performance as Robin articulated above. Kaminario’s secret sauce lies in its revolutionary OS which can benefit from any reliable fast media, combined with off-the-shelf hardware components.
Enterprise grade translates to high availability with no single point of failure and full redundancy of all hardware components at the system level. It protects the customer’s investment by enabling future growth of capacity and performance in the same system.
The capacity starts at 500GB and can grow up to 4TB per enclosure. A Kaminario K2 system can consist of multiple enclosures under a single management for aggregated performance and capacity. Upon hardware failure the system recovers automatically, allowing continued operation of the application.
As for performance, Kaminario K2 provides consistent millions of IOPS and tens of GB/s of throughput under both sequential and random workloads, eliminating the need to worry about either write performance or wear leveling.
In summary, the combination of Enterprise grade, scalable high performance and ease of use is what makes Kaminario K2 appealing to enterprise customers.
[Arik works for Kaminario]
too bad they used Dell blades.
Great looking product. Having worked in the server-based computing, virtualization and WAN optimization business for over ten years, I have never found a good way to optimize and enhance I/O-intensive applications like SQL and VDI (virtual desktops). I am assuming K2 would speed up applications with intensive SQL back ends and virtualized environments. Has Kaminario or StorageMojo done any performance benchmarks running virtualized workloads such as virtual hosted desktops? There are thousands of customers who could take advantage of this. Very exciting product with wide applicability.
But as an Enterprise DDR SSD offering it is still second or even third in line, not first. Maybe the storage heads and software deserve recognition, but not the SSD portion.
TMS RamSans are enterprise level storage and easy to manage. Give more IOPS and lower latency. They can be combined to scale to hundreds of terabytes if needed.
Mike,
All high-end solutions in the data storage world contain at least two controllers. A point to think about when choosing an enterprise-grade solution…
All RamSan solutions (except for the PCIe RamSan10/20) provide multiple, multiport FC connections; some also allow IB. Rather than force users to use a controller of our choice, we prefer to let them use one of their choice. This makes our solution more flexible and easily upgraded as users upgrade their systems.
Andrew, we at TMS (http://www.ramsan.com) have been optimizing SQL Server, Oracle, virtualization and any other IO intensive application for years. Check out our user stories and whitepapers.
Mike
Oh, and by the way, if I have to rebuild 5 TB of DDR from disk storage, how long is that going to take? We use flash modules for backup of our high end RamSan440 so rebuilds are 10-20 times faster than from disk based backup technology. This also allows “instant on” at reduced performance during the rebuild.
Mike,
Seems like you did not fully understand my comment about the multiple controllers on high end storage solutions.
Please read the above description again. Unlike RamSan, there is no single point of failure in the K2 system, no chips, no motherboards – nothing – by design.
K2 consists of many data nodes with complete balancing of the data. A node failure means reconstruction of a single-digit percentage of the total capacity, nowhere near the 5TB you mention.
For a green field install I can see your solution being useful. For existing SAN users I am not so sure as they will already have redundancy. The only single point of failure on the RamSans is the backplane. We always suggest using a RAID1 to a second RamSan to eliminate this single point of failure. We utilize RAID at the chip level, ECC and Chipkill as well as hot spare technologies. We are enterprise level and SAN ready.
Oh, and just to clarify, we have redundant controllers built into each RamSan. Given that the minimum configuration for the K2 is 2 heads and 4 storage nodes, its redundancy comes from multiple components, not at the component level. With 4 RamSan440 or RamSan630 components you get multiple controllers, multiple backplanes, and even at RAID1 higher capacity (1 terabyte or 20 terabytes, and 4, 8, 12 or 16 terabytes in between) and IOPS at lower latency than the K2 provides, in 12U of space.
Mike,
You are comparing apples and oranges. Allow me to explain:
The Kaminario K2 appliance is a black box that utilizes blade server technology. Our operating system, kOS, manages these different components as a single entity.
You just cannot compare the basic building blocks of the Kaminario K2 appliance (ioDirectors, Data nodes), managed by a single OS (the kOS) and running as a single system, with integrating several DIFFERENT TMS appliances to work together and expecting the customer to build a high availability solution on top of it.
Arik,
Odd, that is what almost all of our clients are doing with our systems! Our RamSans look just like block level devices to the system, sound familiar? The more complex a solution, the more chances for failure.