Hacker News

In my experience multi-core compilation does not work.

Anything above make -j3 just locks up the process and fails.




You just need more RAM. I never compile at less than -j$(nproc). Hard with less than 32 GB though - a single clang instance can easily eat upwards of 1 GB of RAM.
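For anyone copy-pasting: a minimal sketch of matching the job count to the core count, assuming `nproc` (Linux/coreutils) with a `sysctl` fallback for macOS/BSD. The fallback to 1 is just a safety default, not from the comment above.

```shell
# CPU count: nproc on Linux (coreutils), sysctl on macOS/BSD, else assume 1.
jobs=$(nproc 2>/dev/null || sysctl -n hw.ncpu 2>/dev/null || echo 1)
echo "would run: make -j$jobs"
```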


Aha, I only compile on ARM, so I've got no room to increase RAM...

Is it the same with g++? I have 4 GB, so I should be able to compile with 4 cores, but the processes only fill 2-3 cores even when I try make -j8 on an 8-core machine, and then it locks the entire OS until it craps out?!

Something is fishy...


Why are you compiling on ARM with only 4GB RAM? Wouldn't it make more sense to cross compile from a machine with more resources, if you cared about build speed? (maybe there's a good reason not to do that, idk)

If it's crapping out when you give it -j8, that seems to strongly suggest you're running into limited resources somewhere.

I'm no expert in the intricacies of parallel builds, but as far as I know you can still have dependencies between targets that will limit parallelism.


This is just a low-end/uncommon hardware problem. I typically do make -j16 on a 4-core x86 system and it just works. You are probably running out of RAM, and the swapping is what's causing that instability.
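One way to keep an oversized -j from tipping a machine into thrashing: GNU make's documented -l/--max-load option, which holds off spawning new jobs while the load average is above the given value. A small runnable sketch (the demo makefile in /tmp is purely hypothetical, just so the command works anywhere):

```shell
# Trivial demo makefile so the invocation below is runnable anywhere.
printf 'all:\n\t@echo done\n' > /tmp/demo.mk
# -j16 allows up to 16 jobs, but -l4 stops new ones while load average > 4.
make -f /tmp/demo.mk -j16 -l4 all
```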


Ideally compilation is CPU intensive. Unless you are constrained on IO, it won't improve your compilation much.


A swap file is your friend.
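For the record, a minimal sketch of setting one up on Linux. The size and path are just examples, every step except the final check needs root, and on filesystems where `fallocate` doesn't work for swap (e.g. some btrfs setups), `dd` is the fallback:

```shell
# Create and enable a 4 GB swap file (example size/path; needs root).
sudo fallocate -l 4G /swapfile   # or: dd if=/dev/zero of=/swapfile bs=1M count=4096
sudo chmod 600 /swapfile         # swap files must not be world-readable
sudo mkswap /swapfile
sudo swapon /swapfile
swapon --show                    # confirm it is active
```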


Actually, you win, swap was set to 0 by default! :D

Also I only have 2 GB (32-bit, though)! Xo
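The "swap was set to 0" case is easy to check for on Linux without any tools, straight from /proc/meminfo:

```shell
# Read the configured swap total; 0 means no swap at all.
swap_kb=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)
echo "SwapTotal: ${swap_kb} kB"
```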

High karma HN users are toxic downers.


If you are on 32-bit you could still have trouble in the future, though.

Source: two days ago I had the exact same problem, so I mounted a 32 GB swap file over an NFS drive because my SD card was 2 GB (don't ask), and it still failed because ld tried to use more than 4 GB of RAM.



