My first "tech job" was doing support in the late 90s for DOS games at EA. At that point most folks had computers with 8 or 16MB of RAM. One of the old Jane's Flight simulator games had a check for minimum RAM requirements which for some reason would overflow at 128MB and say you didn't meet minimum memory requirements. So these folks would spend around $1000 on memory to build premium machines with 128MB and then get these old games which would say they couldn't play the game due to insufficient memory. The fix was to create a RAM drive which would caption of part of the RAM available to storage and leave a limited amount of available RAM to run the game which would clear the memory validation.
I believe at one point someone was able to actually install a game onto the RAM drive and play it with the leftover available RAM, but it required a game that could be installed and played without a reboot.
Hah! That even happens with fairly modern games, like the PC port of Grand Theft Auto IV. The game could not fathom that I have 3GB of VRAM. You would think game developers might have wised up by now and added a few extra bits to whichever field stores that number, for futureproofing purposes.
I've had a similar issue with an old game, I think it was "Broken Sword", when I tried to install it on a newer machine, except the problem was not RAM but hard disk space. The setup attempted to find out how much space was left on the disk before installing, but the counter wrapped around since I had several hundred GB free, which would have been an insanely large amount in 1996 when the game was released.
Anyway, my "clever" solution back then had been to fill up my disk enough before the install so that the setup would detect more free space.
Well, maybe a message more along the lines of "Hey, this game isn't built to run on so little RAM, you're going to have a terrible gaming experience, so don't you dare blame us and tell people our game sucks."
It's almost as fun when you ask a question on a forum and maybe figure it out, maybe not, and then months or years later run into the same problem, Google it, and find your own question and answer as a top search result.
But would that really help for "low resource" testing, since the OS will just swap out those unused zero-filled pages to free up space for the programs that are actually running?
Maybe creating a VM with just as much memory as the target system would be a better solution to get more predictable results?
Can't shm end up in swap, however? If so, it might be necessary to make sure there's no active swap partition, otherwise all this unused allocated memory will end up in it pretty quickly.
I think this is the best solution, combined with swapoff -a of course. If you use programs that keep memory allocated, modern kernels will kill them at some point before choking (the OOM killer). You can/should disable that, though; there's a good article here: http://www.oracle.com/technetwork/articles/servers-storage-d...
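If you go down that road, one way to exempt a specific process is its oom_score_adj knob. A rough sketch, assuming a kernel new enough to have it (2.6.36+; older kernels expose oom_adj instead) and the privileges to lower the value:

/* Sketch: opt this process out of the OOM killer by writing -1000 (the
 * minimum) to /proc/self/oom_score_adj. Assumes a 2.6.36+ kernel; lowering
 * the score needs CAP_SYS_RESOURCE, e.g. run it as root. */
#include <stdio.h>

int main(void) {
    FILE *f = fopen("/proc/self/oom_score_adj", "w");

    if (f == NULL) {
        perror("fopen");
        return 1;
    }
    fprintf(f, "-1000\n");   /* -1000 means "never OOM-kill this process" */
    fclose(f);

    /* ... allocate and hold memory here ... */
    return 0;
}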
I recently recorded a screencast [1] about Linux cgroups and how you can restrict/shape various program resources. For one of the examples, I wrote the following C program to take 50MB of memory; at around the 15:45 mark in the screencast you can see it in action. It could easily be modified to add a sleep and hold the memory for a while (a sketch of that follows the listing). You would most likely need to disable swap if you wanted to use 90% of the free memory: run "free -m; swapoff -a; free -m" as root, then use "swapon -a" to enable it again.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    int i;
    char *p;

    /* intro message */
    printf("Starting ...\n");

    /* loop 50 times, try and consume 50 MB of memory */
    for (i = 0; i < 50; ++i) {

        /* failure to allocate memory? */
        if ((p = malloc(1 << 20)) == NULL) {
            printf("Malloc failed at %d MB\n", i);
            return 0;
        }

        /* take memory and tell user where we are at */
        memset(p, 0, (1 << 20));
        printf("Allocated %d to %d MB\n", i, i + 1);
    }

    /* exit message and return */
    printf("Done!\n");
    return 0;
}
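For reference, a minimal sketch of that sleep modification (the 60-second hold is an arbitrary choice):

/* The same allocator with a sleep added so it holds on to the memory
 * for a while (the 60-second hold is arbitrary). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    int i;
    char *p;

    for (i = 0; i < 50; ++i) {
        if ((p = malloc(1 << 20)) == NULL) {
            printf("Malloc failed at %d MB\n", i);
            return 1;
        }
        memset(p, 0, 1 << 20);   /* touch the memory so it is really used */
    }

    printf("Holding 50 MB for 60 seconds ...\n");
    sleep(60);
    return 0;
}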
They're both ridiculous. I know browsers are really feature-intensive nowadays, but really, 500MB for a few tabs?
I think the story is that Firefox uses less, but because Chrome has an individual process for each tab they don't all die at once, and Chrome has a nicer about:memory page.
I've tried using lighter browsers like Midori but can't get away from having 10+ extensions for various things.
Don't get me wrong, I use Firefox and I love it, it's my main browser. It was just a simple joke with no malicious intent.
That said, I've had Firefox run for several days on my computer and reach 3-4GB of memory used (I have 8GB). At that point it becomes very clunky and needs a restart.
I've got around 200 tabs open and I haven't closed Firefox in months. It's only using around 1GB of RAM at the moment (rarely more).
I'm on Firefox 24.0 on Gentoo Linux and have a great many add-ons installed. It's possible that my add-on that unloads unused tabs from memory is helping me a lot here.
My addons list: https://imageshack.us/a/img819/2726/7gdd.png
I'm genuinely curious, why is this being posted to HN? This seems like something any good systems programmer (i.e. C on UNIX) would know and I'm sure there are plenty of people like that on StackExchange.
I'm not against it per se, I just thought that's what StackExchange was for. I like the idea that it might have helped contribute a better answer, but I just thought it was really odd to see on HN when I already get alerts from SE on things I may be interested in.
In this case the user wanted the program to run out of memory, and the best solution I could come up with was to use ulimit to limit the amount of memory available to the process.
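For the curious, ulimit -v maps to setrlimit(RLIMIT_AS) under the hood; here's a rough sketch of the same idea done directly in C, with an arbitrary 100MB cap:

/* Rough sketch: cap this process's address space at about 100 MB with
 * setrlimit(RLIMIT_AS), the same limit "ulimit -v" applies, and then
 * allocate until malloc() gives up. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void) {
    struct rlimit lim = { 100 << 20, 100 << 20 };   /* soft and hard limit */
    size_t total = 0;

    if (setrlimit(RLIMIT_AS, &lim) != 0) {
        perror("setrlimit");
        return 1;
    }

    while (malloc(1 << 20) != NULL)   /* leak on purpose; we exit right after */
        total += 1 << 20;

    printf("malloc started failing after about %zu MB\n", total >> 20);
    return 0;
}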
Right, but Linux won't back a page with physical memory until you actually read or write it. So if you malloc() 1GB of memory without touching it, it won't use any physical memory at all (mlock() is the exception: locking the range forces those pages to become resident).
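You can watch this happen on Linux by printing the resident set size before and after touching the allocation. A quick Linux-only sketch (the 1GB size is just an example):

/* Linux-only sketch: print VmRSS from /proc/self/status before and
 * after touching a 1 GB allocation, to show that untouched pages are
 * not backed by physical memory. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Print the VmRSS line from /proc/self/status, prefixed with a label. */
static void print_rss(const char *label) {
    char line[256];
    FILE *f = fopen("/proc/self/status", "r");
    if (f == NULL)
        return;
    while (fgets(line, sizeof line, f) != NULL)
        if (strncmp(line, "VmRSS:", 6) == 0)
            printf("%s%s", label, line);
    fclose(f);
}

int main(void) {
    size_t size = (size_t)1 << 30;   /* 1 GB */
    char *p;

    print_rss("before malloc: ");
    p = malloc(size);
    if (p == NULL) {
        printf("malloc failed\n");
        return 1;
    }
    print_rss("after malloc:  ");

    memset(p, 1, size);              /* now the pages actually get faulted in */
    print_rss("after memset:  ");

    free(p);
    return 0;
}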
On Linux, 4K is still a much more common page size. Most "Huge Pages" ("Large Pages" in Windows speak) are 2 or 4 MB, and have been available since 2.6, but I don't think they are widely used yet. x86_64 also supports 1GB pages, but these are even less frequently used.
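If anyone wants to play with them, here's a sketch of explicitly requesting a single 2MB huge page with mmap's MAP_HUGETLB flag; it assumes a 2.6.32+ kernel and only works if some huge pages have been reserved beforehand (e.g. via the vm.nr_hugepages sysctl):

/* Sketch: explicitly request one 2 MB huge page with MAP_HUGETLB
 * (Linux 2.6.32+). Fails with ENOMEM unless huge pages have been
 * reserved first, e.g. via the vm.nr_hugepages sysctl. */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    size_t len = 2 * 1024 * 1024;    /* one 2 MB huge page */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

    if (p == MAP_FAILED) {
        perror("mmap(MAP_HUGETLB)");
        return 1;
    }
    memset(p, 0, len);               /* touch it so it is actually backed */
    printf("Got a 2 MB huge page at %p\n", p);
    munmap(p, len);
    return 0;
}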
Virtualbox running a Windows VM seems pretty good at clinging onto memory and not swapping out. You also get a nice little graphical slider to determine how much memory is allocated to a given VM.
Nowadays I would just run Linux in a VirtualBox configured with the amount of RAM that I wanted to simulate. I've done the same thing with CPU cores to compare performance with 1, 2, 4 and 8 cores. Of course I run VirtualBox on a 16-core server...
I wonder whether methods that rely on zeroing or /dev/zero would be staved off for longer on Mavericks; perhaps its memory compression would squeeze down the recurring patterns that would result?
I'm pretty sure that even if you malloc an enormous amount of memory, it will occupy close to no resources as long as the contents are not touched. Also related: "overcommit memory".
http://en.wikipedia.org/wiki/RAM_drive