How to run 'make -j50' without crashing your system

Discussion in 'all things UNIX' started by Gullible Jones, Jan 14, 2014.

Thread Status:
Not open for further replies.
  1. Gullible Jones

    Gullible Jones Registered Member

    Joined:
    May 16, 2013
    Posts:
    1,466
    aka "Linux desktop performance: jaded veteran edition."

    I'm about to compile my pet Angband variant on an EeePC, using 'make -j'. Let's see what happens.

    [Five screenshots attached: 1024x600 scrots from 2014-01-14, 18:38 through 18:50, showing the build from start to finish; the first shows the 'ulimit -v' output.]

    Compiled fine, works fine.

    What am I doing here? Simple: I've disabled swap, just like I was telling people not to a couple of days ago. :) The only really noteworthy thing is the output of 'ulimit -v' in the first screenshot. That's a cap on virtual memory: it tells the kernel that if the shell setting it, or any child of that shell, allocates more than 1 GB of virtual memory, further allocation attempts will fail. So if an application tries to hog too much memory, it will receive an error and at least get a chance to exit gracefully... in theory, anyway.
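    For reference, the whole recipe is just a couple of shell commands. A minimal sketch, assuming bash (whose 'ulimit -v' takes a value in KiB) and the 1 GB cap from the screenshot:

        sudo swapoff -a      # disable all swap devices
        ulimit -v 1048576    # cap virtual memory at 1 GiB for this shell and its children
        make -j              # parallel build with no job limit, under the cap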

    (But note that this caps virtual memory, not RSS - 'ulimit -m' has not worked on Linux since kernel 2.4.30. Virtual memory size can exceed physical RAM even without swap, since clean pages evicted from RAM still count toward it. IOW this might not be a completely adequate defense against OOM conditions, depending on what you're doing.)
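    If you want to see the difference for yourself, 'ps' reports both numbers side by side - VSZ is what 'ulimit -v' constrains, RSS is what 'ulimit -m' was supposed to constrain:

        ps -o pid,vsz,rss,comm -p $$    # VSZ and RSS (both in KiB) for the current shell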

    Anyway, this is obviously not something you should do on production systems, unless you're cool with data loss. But it's worth putting it out there (and showing everyone what my obsolete paperweight computer can do when I pull out all the stops).
     
  2. Mrkvonic

    Mrkvonic Linux Systems Expert

    Joined:
    May 9, 2005
    Posts:
    10,224
    Use a 48-core box with hyperthreading and 256 GB RAM, and you're fine.
    Mrk
     
  3. NGRhodes

    NGRhodes Registered Member

    Joined:
    Jun 23, 2003
    Posts:
    2,381
    Location:
    West Yorkshire, UK
    I assume you know running that many make jobs is pointless. :p
    How many of the make threads were actually running at one time? (I notice you have 30 or so sleeping threads.)
    Does compilation work with no ulimit set, and also with swap enabled? Any noticeable performance differences?
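    FWIW, the usual rule of thumb for picking a job count is roughly one per core, e.g.:

        make -j"$(nproc)"             # one job per online CPU
        make -j"$(($(nproc) + 1))"    # cores + 1, to keep CPUs busy during I/O waits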
     
  4. Gullible Jones

    Gullible Jones Registered Member

    Joined:
    May 16, 2013
    Posts:
    1,466
    Yes I know. :)

    At the peak, 95 threads were running.

    It works without the ulimit - that's just to prevent memory hogs from invoking the OOM killer. It shouldn't have any effect unless something tries to allocate memory beyond the limit.

    And it also works with swap enabled. :oops: Whoops.

    I'd tested 'make -j' with this program before, using swap, and managed to hang my system. But not this time. Maybe something changed in the newer kernels.

    OTOH, desktop interactivity suffers visibly when eating into swap space.

    Also interesting: there's a pattern here:

    - With Openbox and no swap, the system does not slow down noticeably.
    - With Openbox and swap, the system slows down but does not freeze.
    - With Unity 3D and no swap, the system slows down but does not freeze.
    - With Unity 3D and swap, the system freezes up completely for ~30 seconds.

    Maybe the overhead of a compositing window manager is more significant than one would think.
     
  5. Hungry Man

    Hungry Man Registered Member

    Joined:
    May 11, 2011
    Posts:
    9,146
    What does -j even do without a numeric parameter?
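    With GNU make, a bare '-j' removes the limit entirely: make starts every job whose prerequisites are ready, all at the same time. Compare:

        make -j        # no limit on concurrent jobs (what the OP ran)
        make -j8       # at most 8 jobs at a time
        make -j -l4    # no job limit, but don't start new jobs when load average is 4 or higher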
     