Geeky computer guy that I am, I have my machine instrumented with programs (Mac users: MenuMeters) that tell me all kinds of useless information. Network usage, memory usage, CPU load and, of course, disk activity.
Mostly all this stuff just tells me that the machine hasn’t crashed. But sometimes it tells me something surprising.
Cache out, laid-off, says he’s got a bad cough, wants to get it paid off – look out kid
Like my virtual memory page usage: pageins, pageouts, page faults, copy-on-writes, and cache hits and misses.
Get this: 5,292,427 cache lookups and only 32,860 cache hits – a measly 0.6% hit rate. Why bother?
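Checking that arithmetic takes only a couple of lines. Here’s a quick Python sketch using the counters above (it comes out to about 0.62%):

```python
# Back-of-the-envelope check on the cache counters quoted above.
lookups = 5_292_427
hits = 32_860

print(f"hit rate: {hits / lookups:.2%}")   # ~0.62%
print(f"misses:   {lookups - hits:,}")     # ~5.26 million lookups that found nothing
```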
What is “virtual memory” anyway?
If you know the answer, skip ahead.
Back 30 years ago, when RAM cost over $1,000 per MB, people were particular about how much they bought, even on big machines. Virtual memory extends physical RAM with disk capacity. Typically, least-used memory pages are swapped out to disk. If a document’s memory pages are sitting on disk, they get swapped into physical RAM once you start editing it again.
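To make that concrete, here’s a toy sketch in Python of how a pager might behave: a handful of RAM frames, a least-recently-used eviction policy, and a “disk” that holds whatever got swapped out. The frame count and access pattern are invented for illustration; real virtual memory systems are vastly more sophisticated.

```python
from collections import OrderedDict

class TinyPager:
    """Toy demand pager: a few RAM frames backed by a 'disk' dict (illustration only)."""
    def __init__(self, ram_frames=3):
        self.ram = OrderedDict()   # page -> data, ordered by recency of use
        self.disk = {}             # pages that have been swapped out
        self.ram_frames = ram_frames
        self.pageins = self.pageouts = 0

    def touch(self, page):
        if page in self.ram:                  # already resident: just update recency
            self.ram.move_to_end(page)
            return
        if page in self.disk:                 # page fault: swap it back in
            self.pageins += 1
            data = self.disk.pop(page)
        else:
            data = f"data-{page}"             # first touch: allocate a fresh page
        if len(self.ram) >= self.ram_frames:  # RAM full: evict the least-recently-used page
            victim, vdata = self.ram.popitem(last=False)
            self.disk[victim] = vdata
            self.pageouts += 1
        self.ram[page] = data

pager = TinyPager()
for p in [1, 2, 3, 4, 1]:   # touching page 4 evicts page 1; touching 1 again pages it back in
    pager.touch(p)
print(pager.pageins, pager.pageouts)   # 1 pagein, 2 pageouts
```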
Data dynamics have changed
“Locality of reference” is the behavior that gives caches and virtual memory their power. Locality of reference is the empirical observation that once a piece of data is accessed, it tends to be accessed again several times, maybe even hundreds of times. So it makes great sense to keep that piece of data close to the action until demand for it falls off.
That’s the theory. Yet if data accesses are near-random, you’ll see what I see: almost no cache hits. Which means the overhead of cache management is buying nada.
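A small simulation makes the point starkly. The numbers below are invented for illustration, but the shape of the result is what matters: the same LRU cache that does beautifully on a high-locality stream does essentially nothing for near-random accesses.

```python
import random
from collections import OrderedDict

def hit_rate(accesses, cache_size=1000):
    """Hit rate of a simple LRU cache over a stream of block addresses."""
    cache, hits = OrderedDict(), 0
    for block in accesses:
        if block in cache:
            hits += 1
            cache.move_to_end(block)
        else:
            cache[block] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)   # evict the least-recently-used block
    return hits / len(accesses)

random.seed(1)
n = 200_000
# High locality: 90% of accesses go to a small hot set of blocks.
hot = list(range(500))
local = [random.choice(hot) if random.random() < 0.9 else random.randrange(10_000_000)
         for _ in range(n)]
# Near-random: accesses spread uniformly over a huge block space.
scattered = [random.randrange(10_000_000) for _ in range(n)]

print(f"high locality: {hit_rate(local):.1%}")      # roughly 90%
print(f"near-random:   {hit_rate(scattered):.1%}")  # well under 1%
```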
“Locality” doesn’t matter if you don’t “reference”
Data is cooling. Vast amounts of data are being stored as storage prices decline, and the number of data accesses per megabyte is steadily dropping. And that’s a good thing, since disk accesses per megabyte are dropping too.
What I hadn’t thought about, and I haven’t seen discussed anywhere else, is the impact this change must have on system architecture. Much effort has gone into making cache mechanisms, including second and third level caches, virtual memory, system caches and disk caches, fast and efficient. Yet, if you use your system the way I do mine, much of this effort and overhead is wasted.
Expensive array assets are becoming less valuable
Many applications, such as databases, do exhibit high levels of locality of reference, and they probably always will. But for unstructured data, how much sense does it make to spend good money on costly caches and the associated engineering for a resource that may return very little value?
The StorageMojo take
As scale-out storage architectures continue to evolve, engineers will need to look at the workloads they are designing for to determine the most cost-effective means of supporting them. “Cache everywhere” architectures – disk, network, system, and more – may actually hurt performance while adding cost and complexity. It is another nail in the coffin of the traditional disk array.
It’s something worth thinking about the next time you lay down cold, hard cash for cache.
Comments welcome, as always. Comments moderated, because moderation is a virtue, except in the defense of liberty.
I updated this article by shortening it and adding a gratuitous Subterranean Homesick Blues reference. My apologies to Mr. Dylan.
Hi Robin,
I remember thinking a similar thing about virtual memory a while ago when building myself a Linux box. So I ran it for a while and kept an eye on memory and virtual memory usage (it wasn’t operating under a major load, more like just ticking over nicely). Then I stuffed a load more memory into it and watched to see if it made any difference, and to my surprise it made very little difference, if any. So I dug around a little and found that the memory management system in the 2.4 kernel was designed to use virtual memory pre-emptively even if it wasn’t required. You could have shed loads of memory available and still see quite high usage of your page file, supposedly just in case things got extremely hectic, so you didn’t have to sit around waiting until real memory was freed up.
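If anyone fancies eyeballing this on a more recent Linux box, a few lines of Python will do it. This is just a rough sketch; it reads /proc/meminfo (assuming the modern key/value format) and compares free RAM with swap in use:

```python
# Rough check: is swap being used even while plenty of RAM sits free?
# Linux-only; assumes the modern key/value /proc/meminfo format.

def meminfo():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, _, rest = line.partition(":")
            fields = rest.split()
            if fields and fields[0].isdigit():
                info[key.strip()] = int(fields[0])   # values are reported in kB
    return info

m = meminfo()
free_mb = m["MemFree"] / 1024
swap_used_mb = (m["SwapTotal"] - m["SwapFree"]) / 1024
print(f"free RAM: {free_mb:.0f} MB   swap in use: {swap_used_mb:.0f} MB")
```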
I’m sure the same will be true of just about every other OS, especially where I live in the open systems world.
As for traditional monolithic arrays and their huge cache architectures… I think we are already seeing the more modern data centres, with their unstructured data, looking for something a little different to what the more traditional data centres (banks, large government agencies etc.) require. We recently talked about this in an article titled The MySpace storage monster on rupturedmonkey – http://blogs.rupturedmonkey.com/?p=53
However, and I’m thinking on the spot here, although I agree that cache might not be as useful as the marketing brochure leads us to believe, having it there is still not doing us any real harm. For example, I’ve been thinking about the increasing cache capacities in monolithic arrays and wondered how much time is lost searching cache only to find your data is not there and needs to be fetched from disk. That lost time, however, is almost insignificant in comparison to actually getting the data from spinning disk.
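To put rough numbers on that (the latencies below are plucked out of the air, just orders of magnitude), even at a dismal hit rate the time spent checking the cache on a miss barely registers next to a trip to spinning disk:

```python
# Illustrative latencies only (rough orders of magnitude, not measured figures).
cache_lookup_us = 10      # time to check the cache, hit or miss
cache_hit_us    = 100     # time to return data from cache on a hit
disk_read_us    = 8000    # ~8 ms for a random read from a spinning disk

def avg_access_us(hit_rate):
    """Average access time: every request pays the lookup; misses also pay the disk."""
    hit  = cache_lookup_us + cache_hit_us
    miss = cache_lookup_us + disk_read_us
    return hit_rate * hit + (1 - hit_rate) * miss

for hr in (0.006, 0.5, 0.9):
    print(f"hit rate {hr:>5.1%}: {avg_access_us(hr)/1000:.2f} ms average")
# At a 0.6% hit rate the answer is essentially the disk time; the 10 µs
# lookup 'wasted' on each miss is noise next to the 8 ms seek.
```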
Given the technology advancements of the last few decades, I don’t really care if my cache is under-utilised; I’m more amazed that my computer still comes with a spinning disk (I know they are cheap, but I’m a Mac user so don’t mind paying for the good stuff). In fact, here’s an analogy I’ve just thought up – I have my new all-singing, all-dancing MacBook Pro with all the mod cons and optional extras, yet at the heart of it is a 5400 RPM SATA disk. Now imagine this – I have a nice new BMW M6 (I don’t) with satellite navigation, DVD players in the back, climate control, all the features you can think of, but I have to drive it with my feet like the Flintstones!!! You just wouldn’t!
Anyway, apologies as I’m starting to rant a little 😉 Good post though.
Nigel,
Thanks for sharing your experience with Linux memory management. I assume that these bits are so deeply embedded in the OS, and so well wrung out, that no one wants to touch them. There is no doubt that there is overhead – keeping track of page usage and demand, swapping, and the disk I/Os must add up to something – but they’ve been part of the landscape so long that maybe only a few crusty old realtime systems guys even have a handle on it.
The cost issue is a little trickier, IMHO. Certainly it is visible when cache is an extra-cost option on a disk, an array, or an HBA. What isn’t so obvious, and is more difficult to quantify, is the engineering effort that goes into, for example, cache coherency in an array or across a network. The basic mechanisms are well understood; it is the 2% corner cases that probably absorb 60% or more of the engineering time. Maybe I’m barking up the wrong tree here and there are no significant savings to be realized, yet it just seems like one place where massive storage could be simplified and cost-reduced.
I like your comment about the 5400 RPM drive in your new MacBook Pro. That’s one place where the SSD will be a big win.
Robin