I've been a Mac user since the beginning, and by far my biggest frustration is the perpetual running-out-of-RAM, even when I close basically everything. I have 4GB of RAM, and frequently catch kernel_task using at least half of it.
As another Mac user since "the beginning", I could smell that rotten smell; slightly at first with SL, then thick and rancid with Lion. So my Macs are now back on 10.5 Leopard, and I'm not missing a thing, least of all the lousy performance. Unless ML turns out to be hugely superior, I might be stuck on 10.5 until I get a machine that simply can't run it anymore.
In the meantime, I'm also hedging my bets, and I've gotten very comfortable with Windows 7 for productivity (ok, it's really for gaming) and Ubuntu Linux for web/LAN serving.
Why shouldn't the kernel be using as much memory as possible? It's not like big disk caches or what have you cause your memory to go bad. As long as you get it back when you need it, who cares?
The problem is that it then swaps to disk when you use an application. I'm fine with 100% of memory being used at all times, but it needs to actually be used, and preferably by whatever needs it most.
The issue is just transparency. You want to know how much memory you actually have available for use if you need it. How would you like it if your car's gas gauge was close to empty all the time because the car was caching gas for long trips?
It would seem there's a simple solution -- another number on the system monitor displaying how much memory is available for use if needed.
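For what it's worth, you can already compute something close to that number yourself from vm_stat's counters. Here's a rough Python sketch; which counters vm_stat prints varies between OS X releases, so treating exactly these ones as "reclaimable" is my assumption, not gospel:

    import re
    import subprocess

    # vm_stat reports Mach VM counters in pages; the page size is in its header line.
    out = subprocess.check_output(["vm_stat"], text=True)
    page_size = int(re.search(r"page size of (\d+) bytes", out).group(1))

    pages = {}
    for line in out.splitlines()[1:]:
        name, _, value = line.partition(":")
        if value.strip():
            pages[name.strip()] = int(value.strip().rstrip("."))

    # "Available if something asks for it" ~= free pages plus pages the kernel
    # would readily hand back (inactive, speculative, purgeable).  Which of these
    # counters exist depends on the OS X release, hence the .get() defaults.
    reclaimable = ("Pages free", "Pages inactive", "Pages speculative", "Pages purgeable")
    available = sum(pages.get(k, 0) for k in reclaimable) * page_size
    print(f"~{available / 2**20:.0f} MB available on demand")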
Flawed analogy: you use up the gas and your car stops. You use up memory and … the kernel swaps pages around. Now, if the kernel isn't giving you back memory, that's a problem, but the OP doesn't actually show that this is happening.
No, it captures what I want to worry about. I do not want to be in the situation where processes that I am interacting with in real time are paging stuff out to disk. This is really bad for my user experience.
So if my memory is "full" with a bunch of just-in-case stuff, I'll gladly swap it out for real data that a real running process is using. But if it's "full" of data in use by running processes, then I want to think twice about opening a new application. And I want my memory manager to tell me the difference between those two "full" cases.
Recent Windows NT, Linux, and Darwin kernels will all drop disk cache pages the moment something more important needs them. Memory management in modern kernels can be very complicated, and just because a process appears to do a lot of paging doesn't necessarily mean it needs more physical memory.
The notion that disk cache is so ungodly important that the OS will SWAP MY APPLICATIONS OUT TO DISK TO PRESERVE IT boggles my mind a bit.
Users of desktop systems clearly don't like this behavior; in fact, they'll do crazy things like purging the disk cache via cron every minute to try to stop it from happening.
Process A (let's call it Safari) allocated 600MB of memory. Out of this 600MB, it hasn't used 400MB for quite a while (because, for example, it contains data for tabs you haven't looked at in hours). Now, I'm not sure how Darwin does this, but I know for a fact that Windows NT kernels will try to write the contents of in-memory pages to the disk at the first good opportunity; that way they save time when those pages really do get paged out. I assume there's a similar mechanism in Darwin. So it's very likely that the 400MB in question is already on the disk. Now the user starts process B (let's call it Final Cut Pro) that reads and writes to the disk very heavily, typically touching the same files over and over. It's not an unreasonable thing for the kernel to just drop Safari's 400MB from physical memory and use it for disk caching Final Cut Pro. Throw in a few mmaps and suddenly it's not obvious at all which pages should be in memory and which should be on disk for the best user experience.
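You can actually peek at this from userland: mincore(2) reports which pages of a mapping are currently resident. A rough Python/ctypes sketch follows; it assumes a Unix whose libc exports mincore (Linux and OS X both do), and /usr/share/dict/words is just a convenient file to map, nothing special:

    import ctypes
    import ctypes.util
    import mmap
    import os

    libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
    libc.mincore.argtypes = (ctypes.c_void_p, ctypes.c_size_t,
                             ctypes.POINTER(ctypes.c_ubyte))
    PAGE = os.sysconf("SC_PAGE_SIZE")

    def resident_pages(path):
        """Ask the kernel which pages of a file-backed mapping are in RAM."""
        fd = os.open(path, os.O_RDONLY)
        size = os.fstat(fd).st_size
        m = mmap.mmap(fd, size, mmap.MAP_PRIVATE,
                      mmap.PROT_READ | mmap.PROT_WRITE)
        buf = (ctypes.c_char * size).from_buffer(m)   # writable view, so we can take its address
        npages = (size + PAGE - 1) // PAGE
        vec = (ctypes.c_ubyte * npages)()
        if libc.mincore(ctypes.addressof(buf), size, vec) != 0:
            raise OSError(ctypes.get_errno(), "mincore failed")
        resident = sum(b & 1 for b in vec)            # low bit set = page is resident
        del buf
        m.close()
        os.close(fd)
        return resident, npages

    r, n = resident_pages("/usr/share/dict/words")
    print(f"{r} of {n} pages are in physical memory right now")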
>It's not an unreasonable thing to do on the kernel's part to just drop Safari's 400MB from the physical memory and use it for disk caching Final Cut Pro.
The problem with this line of reasoning is that a large amount of cache will often not give you much more benefit than a small amount. Indeed, that's the nature of caching: you get most of the benefit from the first bit of cache, but the level of added benefit drops dramatically with more cache.
What if using 400MB of cache for FCP only gave a 5% net performance advantage over using 40MB of cache? Would it still be worth it to take away that extra 360MB from Safari?
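That diminishing-returns claim is easy to convince yourself of with a toy model. This isn't how any real kernel behaves, just an LRU cache replayed over a skewed (Zipf-ish) access trace, and the distribution parameters are made up:

    import random
    from collections import OrderedDict

    def lru_hit_rate(trace, cache_size):
        """Hit rate of a plain LRU cache replayed over an access trace."""
        cache, hits = OrderedDict(), 0
        for block in trace:
            if block in cache:
                hits += 1
                cache.move_to_end(block)
            else:
                cache[block] = None
                if len(cache) > cache_size:
                    cache.popitem(last=False)   # evict the least-recently-used block
        return hits / len(trace)

    # Skewed access pattern over ~100k distinct blocks: a handful of hot blocks
    # absorb most of the traffic, roughly what real file access looks like.
    random.seed(1)
    trace = [min(int(random.paretovariate(1.1)), 100_000) for _ in range(200_000)]

    for size in (40, 400, 4_000, 40_000):
        print(f"cache of {size:>6} blocks -> {lru_hit_rate(trace, size):.1%} hit rate")

With an access pattern that skewed, most of the hit rate is already there at the smallest cache size; making the cache a thousand times bigger buys comparatively little, which is exactly the 40MB-vs-400MB situation above.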
And there's the issue of human psychology: people deal much more easily with a little slowdown spread evenly than with a full-on stop for a short amount of time (even if the full-on stop scenario gives you greater average performance). I'd prefer Aperture run 5% more slowly than it might otherwise, if that meant I never saw a beachball when running Safari.
This is a very good point and I think it illustrates well how difficult it is to write a paging / caching system that does the right thing most of the time.
He's not talking about disk cache; disk cache is accounted for separately. He's talking about actual memory allocated by the kernel_task process. It's been obvious since Lion came out that there's a problem, and so far Apple hasn't fixed it.
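For anyone who wants to watch this on their own machine without opening Activity Monitor: a single top sample in logging mode includes kernel_task and the overall physical-memory line. A throwaway Python wrapper, assuming the stock OS X top and its -l flag:

    import subprocess

    # One sample from top in logging mode; kernel_task appears in the process
    # list with its resident memory in the MEM column.
    sample = subprocess.check_output(["top", "-l", "1"], text=True)
    for line in sample.splitlines():
        if "kernel_task" in line or line.startswith("PhysMem"):
            print(line)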