What's with these sysctl defaults?

Discussion in 'all things UNIX' started by Gullible Jones, Nov 1, 2012.

Thread Status:
Not open for further replies.
  1. On all Linux distros currently available, the sysctl variables vm.dirty_background_ratio and vm.dirty_ratio are set to 5 and 10 respectively, or thereabouts. These variables set the percentage of RAM that can be occupied by dirty pages - data waiting to be written to disk - before a flush is forced: asynchronously for dirty_background_ratio, synchronously for dirty_ratio.

    Mind, those defaults are for when your computer is plugged in. For laptops on battery power, those variables are set by pm-utils to 40 and 60.

    My question is, why on Earth would anyone set them so high? If you have 1+ GB of RAM, then when you hit that 5% limit, your laptop or desktop computer will freeze up. Decompress a big tarball? Copy a few GB from an external drive? Bam, freeze. I don't even want to think about what would happen with dirty_background_ratio at 40%. Sure, increasing dirty_background_ratio will delay the write, but eventually it will have to happen, and when it does, watch out.

    (Also, there's the obvious risk of data loss when significantly delaying writes...)

    In my experience the default settings tend to result in huge slowdowns when e.g. installing stuff, whereas vm.dirty_background_ratio=1 and vm.dirty_ratio=2 result in much more reasonable desktop performance, and only a small decrease in throughput. So... Whence the defaults?
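
    For anyone who wants to try those lower values, here is a sketch of how they might be persisted as a sysctl.d drop-in (the file name is just an example; the values are the ones suggested above, not a recommendation):

    ```
    # /etc/sysctl.d/99-dirty.conf  (example file name)
    # Flush dirty pages early: background writeback at 1% of RAM,
    # synchronous throttling at 2%.
    vm.dirty_background_ratio = 1
    vm.dirty_ratio = 2
    ```

    After saving, `sudo sysctl --system` (or a reboot) reloads the drop-ins, and `sysctl vm.dirty_ratio` confirms the value took effect; `sysctl -w vm.dirty_ratio=2` applies it immediately without persisting.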
     
  2. tancrackers

    tancrackers Registered Member

    Joined:
    May 22, 2012
    Posts:
    18
    Location:
    USA
    Systemd is still relatively new to the stable, Linux world. Most distros haven't quite tweaked everything to their liking.
     
  3. NGRhodes

    NGRhodes Registered Member

    Joined:
    Jun 23, 2003
    Posts:
    2,381
    Location:
    West Yorkshire, UK
    If the value is too low it can cause too much disk I/O, rewriting data to disk that could have been discarded or replaced (e.g. temp data, frequently modified data).

    Too high and you can suffer I/O saturation (like the example you gave).

    Have to say it's not something I have tuned (like swappiness), because I've never suffered any repeated/specific slowdowns.

    Tancrackers, what has Systemd got to do with these system settings (which have been around and in use since before Systemd)?

    Cheers, Nick
     
  4. Makes sense. I haven't seen that yet though (with dirty_ratio at 2 and dirty_background_ratio at 1, and 1 GB of RAM).

    I find that laptops get very unresponsive during big writes, unless those variables are tuned. Makes sense I guess, since laptops (my laptops anyway) have a lot of RAM, but slow CPUs and slow hard drives.

    I think he's confusing sysctl and systemctl. And I must admit to some annoyance at systemctl's name...
     
  5. Mrkvonic

    Mrkvonic Linux Systems Expert

    Joined:
    May 9, 2005
    Posts:
    10,223
    There are other parameters, like the centisecs tunables. Whichever threshold is hit first applies. And if you read my article on system tweaking, I did mention this specific parameter, and how there's no golden rule on the best value.

    http://www.dedoimedo.com/computers/linux-cool-hacks-3.html

    -----

    If however, you feel really adventurous, you might want to explore the kernel tunables under /proc/sys/vm. There are several of those.

    The swappiness parameter controls how aggressively your system will try to swap pages. The values range from 0 to 100. In most cases, your disk will always be the bottleneck, so it will make little difference. Then, there's the dirty_ratio tunable, which sets the percentage of total system memory that can be taken by dirty pages. Once this limit is hit, the system will start flushing data to the disk. Another parameter that is closely related to the dirty_ratio is dirty_expire_centisecs, which determines the maximum age of dirty pages before they are flushed. The system will commit the dirty data based on the first of the two thresholds to be met, which will most likely be the expire time.

    A mental exercise: the default dirty_ratio on Linux is 40%, while the default expire tunable is set to 3000 centiseconds. A centisecond is 1/100 of a second or 10ms, so we have 30 seconds total. If you have a machine with 4GB RAM, then at most 1.6GB will be dedicated to dirty pages. Now, this means that whatever you're writing needs to create some 55MB of data every second to exceed this threshold within the thirty-second window, before the kernel flushing thread wakes and starts writing to the disk. In most cases, you will rarely have such aggressive writes. Some notable examples include large copies, video rendering and the like. In daily use, hardly ever. If you have more than 4GB RAM, say 8-16GB, then this becomes even less likely.
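
    The arithmetic in that exercise can be reproduced with a few lines of Python (the figures below are the assumed values from the example - 4 GB of RAM, 40% dirty_ratio, 3000 centiseconds - not values read from a live system):

    ```python
    # Reproduce the dirty-page thresholds from the example above.
    # All inputs are the assumed example values, not live kernel settings.
    ram_bytes = 4 * 1024**3        # 4 GB of RAM
    dirty_ratio = 40               # vm.dirty_ratio, percent of RAM
    expire_centisecs = 3000        # vm.dirty_expire_centisecs

    dirty_limit = ram_bytes * dirty_ratio // 100   # bytes of dirty pages allowed
    expire_seconds = expire_centisecs / 100        # 1 centisecond = 10 ms

    # Sustained write rate needed to hit the ratio before the expiry timer fires
    rate_mb_per_s = dirty_limit / expire_seconds / 1024**2

    print(f"dirty limit: {dirty_limit / 1024**2:.0f} MB")    # 1638 MB (~1.6 GB)
    print(f"expiry window: {expire_seconds:.0f} s")          # 30 s
    print(f"required write rate: {rate_mb_per_s:.1f} MB/s")  # 54.6 MB/s
    ```

    Plugging in a larger RAM size shows why the threshold becomes even harder to hit on 8-16GB machines: the dirty limit scales with RAM while the 30-second window stays fixed.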

    This exercise also tells you whether you really need that high dirty_ratio, how to set the other tunables and more. Having too many dirty pages also means very long and sustained writes when the time comes to commit them to disk. Food for thought, fellas. There's no golden rule.

    -----

    Regards,
    Mrk
     