How to fill 90% of free memory? (unix.stackexchange.com)
58 points by yiedyie on Nov 8, 2013 | 48 comments



My first "tech job" was doing support in the late 90s for DOS games at EA. At that point most folks had computers with 8 or 16MB of RAM. One of the old Jane's Flight simulator games had a check for minimum RAM requirements which for some reason would overflow at 128MB and say you didn't meet minimum memory requirements. So these folks would spend around $1000 on memory to build premium machines with 128MB and then get these old games which would say they couldn't play the game due to insufficient memory. The fix was to create a RAM drive which would caption of part of the RAM available to storage and leave a limited amount of available RAM to run the game which would clear the memory validation.

I believe at one point someone was able to actually install a game into the RAM drive and play it on the leftover available RAM, but it required a game that could be installed and played without a reboot.

http://en.wikipedia.org/wiki/RAM_drive


Hah! That even happens with fairly modern games, like the PC port of Grand Theft Auto IV. The game could not fathom that I have 3GB of VRAM. You would think game developers might have wised up by now and added a few extra bits to whichever field stores that number, for futureproofing purposes.


I had a similar issue with an old game (I think it was "Broken Sword") when I tried to install it on a newer machine, except the problem was not RAM but hard disk space. The setup was attempting to find out how much space was left on the disk before installing, but the counter wrapped around since I had several hundred GB free on my disk, which would have been an insanely large amount in 1996 when the game was released.

Anyway, my "clever" solution back then had been to fill up my disk enough before the install so that the setup would detect more free space.


Dick move to refuse to run the game anyway; just show a notice and have a "Continue" button.


Well, maybe a message more along the lines of "Hey, this game isn't built to run on so little RAM, you're going to have a terrible gaming experience, so don't you dare blame us and tell people our game sucks."


Infinite loop: HN links to this question, StackExchange links to HN for an answer.


It's almost as fun when you ask a question on a forum and maybe figure it out or maybe not, then months or years later run into the same problem, Google it, and find your own question and answer as a top search result.


Take that, Google crawler bot.


I guess it's not the first time.


Just fill /dev/shm via dd or similar.

    dd if=/dev/zero of=/dev/shm/fill bs=1k count=1024k

(option #2 is to limit the amount of ram available to the kernel via grub.conf, see my comment below)
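
A rough sketch of sizing that fill to ~90% of whatever is currently free (this assumes /dev/shm is a tmpfs and that the "free" column of free -m is a good enough estimate):

    # take 90% of the currently free memory (in MB) and write that many 1MB blocks
    FREE_MB=$(free -m | awk '/^Mem:/ {print $4}')
    dd if=/dev/zero of=/dev/shm/fill bs=1M count=$((FREE_MB * 90 / 100))

    # remove the file to give the memory back
    rm /dev/shm/fill

Note that /dev/shm is usually mounted with a default size cap of 50% of RAM, so it may need to be remounted larger before it can hold 90%.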


But would that really help for "low resource" testing, since the OS will just swap out those unused zero-filled pages to free up space for actually running programs?

Maybe creating a VM with just as much memory as the target system would be a better solution to get more predictable results?


There is a way from grub to limit the amount of memory the kernel boots with.

Probably the easiest way if you want low memory without fiddling.

    mem=1G

    mem=512M
etc.
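
If you're on GRUB 2 rather than a legacy grub.conf, roughly (paths and the regenerate command vary by distro):

    # /etc/default/grub: append mem= to the kernel command line
    GRUB_CMDLINE_LINUX_DEFAULT="quiet mem=512M"

    # regenerate the config, then reboot
    update-grub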


that reminds me of that day when I set

    mem = 512
before rebooting. Ever since then, I know why my primary school teacher always got pissed when we left off the units :-)

On a related note: Yes. The kernel needs more than 512 bytes of RAM to boot - even back in the 2.2 days (when this took place)


I wouldn't bother with a full blown VM -- LXC might be relevant.


Can't shm end up in swap however? If so it might be necessary to make sure there's no active swap partition otherwise all this unused allocated memory will end up in it pretty quickly.


You can temporarily disable swap

     swapoff -a


At least in my case this is the answer


I think this is the best solution, with swapoff -a of course. If you use programs that keep memory allocated, modern kernels will kill them at some point before choking. You can/should disable that though; see a good article here: http://www.oracle.com/technetwork/articles/servers-storage-d...
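
For example, on reasonably recent kernels you can exempt a single process from the OOM killer roughly like this (replace <pid> with the process you care about; older kernels use /proc/<pid>/oom_adj with -17 instead):

    # -1000 means "never OOM-kill this process"
    echo -1000 > /proc/<pid>/oom_score_adj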


I recently recorded a screencast [1] about Linux cgroups and how you can restrict/shape various program resources. For one of the examples, I wrote the following C program to take 50MB of memory; around the 15:45 mark in the screencast you can see it in action, but it could easily be modified to add a sleep and hold the memory for a while. You would most likely need to disable swap if you wanted to use 90% of the free memory: run "free -m; swapoff -a; free -m" as root, then use "swapon -a" to enable it again.

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  
  int main(void) {

      int i;
      char *p;

      /* intro message */
      printf("Starting ...\n");

      /* loop 50 times, try and consume 50 MB of memory */
      for (i = 0; i < 50; ++i) {

          /* failure to allocate memory? */
          if ((p = malloc(1<<20)) == NULL) {
              printf("Malloc failed at %d MB\n", i);
              return 0;
          }

          /* take memory and tell user where we are at */
          memset(p, 0, (1<<20));
          printf("Allocated %d to %d MB\n", i, i+1);

      }

      /* exit message and return */
      printf("Done!\n");
      return 0;

  }
[1] http://sysadmincasts.com/episodes/14-introduction-to-linux-c...
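
If anyone wants the cgroup bit without watching the whole screencast, a minimal sketch using the cgroup v1 memory controller (assuming it's mounted at /sys/fs/cgroup/memory; "demo" is just a made-up group name) looks roughly like this:

    # create a group, cap it at 50MB, and move the current shell into it
    mkdir /sys/fs/cgroup/memory/demo
    echo 50M > /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
    echo $$ > /sys/fs/cgroup/memory/demo/tasks

    # the allocator above should now fail (or get OOM-killed) around the 50MB mark
    ./a.out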


Or you could use 'stress':

http://linux.die.net/man/1/stress
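
Something along these lines (from memory, so check the man page): one worker that allocates and touches 1GB, then holds onto it until the timeout:

    stress --vm 1 --vm-bytes 1G --vm-hang 0 --timeout 60s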


Just run several instances of Firefox for a couple of days non-stop.


I thought that was Chrome. I've had Chrome consume 512MB per tab for 15+ tabs.

I have 32GB of RAM so I can cope, but calling out Firefox as the worst offender in its class is a wee bit rich.


They're both ridiculous. I know browsers are really feature-intensive nowadays, but really, 500MB for a few tabs?

I think the story is that Firefox uses less, but because Chrome has an individual process for each tab they don't all die at once, and Chrome has a nicer about:memory page.

I've tried using lighter browsers like Midori but can't get away from having 10+ extensions for various things.


Don't get me wrong, I use Firefox and I love it, it's my main browser. It was just a simple joke with no malicious intent.

That said, I've had Firefox run for several days on my computer and reach 3-4GB of memory used (I have 8GB). At that point it becomes very clunky and requires a restart.


Doesn't work here; three months and counting.


honestly? virt-manager.py has some bug I've been looking at for a while.

if I leave it on over the weekend hooked up to a few of my vm hosts, then I've got about 9gb ram used.


I've got around 200 tabs open and I haven't closed Firefox in months. It's only using around 1GB of RAM at the moment (rarely more). I'm on Firefox 24.0 on Gentoo Linux and have a great many add-ons installed. It's possible that my add-on that unloads unused tabs from memory is helping me a lot here. My add-on list: https://imageshack.us/a/img819/2726/7gdd.png


I hope you can find it... that issue has been driving me nuts for a few months now.

(Granted, I haven't noticed it happening quite as much with the latest version in Fedora 19, maybe some of the issues have been fixed?)


...or any Java program.


I'm genuinely curious, why is this being posted to HN? This seems like something any good systems programmer (i.e. C on UNIX) would know and I'm sure there are plenty of people like that on StackExchange.


Because there are a lot of people on Hacker News who aren't good systems programmers.

I imagine most StackExchange posts here are because people might be interested in the answers rather than know them already.


I'm not against it per se, I just thought that's what StackExchange was for. I like the idea that it might have helped contribute a better answer, but it was really odd to see on HN when I already get alerts from SE on things I may be interested in.


I answered a very similar question on stackoverflow here:

http://stackoverflow.com/q/1229241/25981

In this case the user wanted the program to run out of memory and the best solution I could come up with was to use ulimit, to limit the amount of memory available to the process.
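
As a quick bash sketch (ulimit -v takes kilobytes, and the subshell keeps the limit from sticking to your interactive shell; ./your-program is whatever you want to starve):

    # cap virtual memory at ~100MB, then run the program under test
    ( ulimit -v 102400; ./your-program )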


Option 1: tmpfs or another memory-backed fs (rough sketch below)

Option 2: Quick C program, with two gotchas: make sure you touch every page after allocating, and keep touching them, otherwise they will be swapped out.

(Also turning off or limiting swap space may be helpful)
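
For option 1, tmpfs takes its size as a percentage of RAM, so a rough sketch is:

    # mount a tmpfs capped at 90% of RAM and fill it until dd hits ENOSPC
    mkdir -p /mnt/fill
    mount -t tmpfs -o size=90% tmpfs /mnt/fill
    dd if=/dev/zero of=/mnt/fill/blob bs=1M

(The pages are only actually consumed as they're written, which is why the dd is needed.)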


That is why POSIX provides the mlock() and mlockall() system calls, to prevent memory pages from being swapped out.


Right, but Linux won't bind any physical memory page until you actually read or write in it. So if you malloc() and mlock() 1GB of memory without reading/writing in it, that will not use any bit of physical memory.


Will writing just one byte one time suffice? Genuinely unaware and curious.


If you only touch one byte, the system will only allocate one memory page. A memory page is typically 1024 KB so that wouldn't suffice.


On Linux, 4K is still a much more common page size. Most "Huge Pages" ("Large Pages" in Windows speak) are 2 or 4 MB, and have been available since 2.6, but I don't think they are widely used yet. x86_64 also supports 1GB pages, but these are even less frequently used.

http://lwn.net/Articles/374424/
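
Easy enough to check on a given box:

    getconf PAGESIZE                    # usually 4096
    grep Hugepagesize /proc/meminfo     # usually 2048 kB on x86_64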


Yes 4K is default page size. Sorry.


Virtualbox running a Windows VM seems pretty good at clinging onto memory and not swapping out. You also get a nice little graphical slider to determine how much memory is allocated to a given VM.


Zeno's little known memory paradox?


Nowadays I would just run Linux in a VirtualBox configured with the amount of RAM that I wanted to simulate. I've done the same thing with CPU cores to compare performance with 1, 2, 4 and 8 cores. Of course I run VirtualBox on a 16-core server...
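
If you'd rather script it than drag the slider, VBoxManage can set the memory too (the VM name here is just a placeholder):

    # give the VM 512MB of RAM (value is in MB)
    VBoxManage modifyvm "test-vm" --memory 512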


I wonder whether methods that use zeroing or /dev/zero would be staved off for longer on Mavericks - perhaps its memory compression would squash the recurring patterns that result?


Would a malloc (alone) work? Doesn't it typically act as if the memory were allocated but not actually use physical RAM until the data is filled?


I'm pretty sure that even if you malloc an enormous amount of memory, it will occupy close to no resources as long as the contents are not touched. Also related: "overcommit memory".


Linux is a bit strange when it comes to memory allocation, see eg:

http://www.win.tue.nl/~aeb/linux/lk/lk-9.html#ss9.6
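
The behaviour is controlled by the overcommit sysctls, roughly:

    # 0 = heuristic overcommit (default), 1 = always overcommit, 2 = don't overcommit
    cat /proc/sys/vm/overcommit_memory
    sysctl -w vm.overcommit_memory=2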


A java hello world application should do the trick.



