Is there a way to limit the buffer cache size in Linux?


Since you're asking about a single program, you could use cgroups:

Create a cgroup, e.g. group1, with a memory limit (50 GB here; other controllers such as CPU are also supported, and CPU is included in the commands below):

cgcreate -g memory,cpu:group1

cgset -r memory.limit_in_bytes=$((50*1024*1024*1024)) group1

Then, if your app is already running, move it into this cgroup:

cgclassify -g memory,cpu:group1 $(pidof your_app_name)

Or execute your app within this cgroup:

cgexec -g memory,cpu:group1 your_app_name
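
Note that the commands above target cgroup v1. On a system using cgroup v2 (the unified hierarchy), the equivalent knob is memory.max; a minimal sketch, assuming the memory controller is enabled in the parent's cgroup.subtree_control and /sys/fs/cgroup is the unified mount:

# create the group and cap it at 50 GB (hard limit; the kernel reclaims
# page cache charged to the group before invoking the OOM killer)
mkdir /sys/fs/cgroup/group1
echo $((50*1024*1024*1024)) > /sys/fs/cgroup/group1/memory.max

# move an already-running app into the group (one PID per write)
echo $(pidof -s your_app_name) > /sys/fs/cgroup/group1/cgroup.procs

On systemd-based systems, the same cap can also be applied by running the app in a transient scope:

systemd-run --scope -p MemoryMax=50G your_app_name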


Comments

  • RoughTomato
    RoughTomato over 1 year

    I've got a program consuming a lot of memory during I/O operations, which is a side effect of performing loads of them. When I run the program using direct_io the memory issue is gone, but the time it takes for the program to finish the job is four times greater.

    Is there any way to reduce buffer cache (kernel buffer used during I/O operations) maximum size? Preferably without changing kernel sources.

    I've tried reducing /proc/sys/vm/dirty_bytes etc., but that doesn't seem to make any noticeable difference.

    UPDATE: running echo 1 > /proc/sys/vm/drop_caches, echo 2 > /proc/sys/vm/drop_caches, or echo 3 > /proc/sys/vm/drop_caches during the program's runtime temporarily reduces the amount of used memory (a consolidated sketch follows this comment).

    Can I somehow limit pagecache, dentries and inodes instead of constantly freeing them? That would probably solve my problem.

    I hadn't noticed it before, but the problem occurs with every I/O operation, not just partitioning. It looks like Linux caches everything going through I/O up to the point where it reaches almost all available memory, leaving 4 MB free. So there's some sort of upper limit on how much memory can be cached for I/O, but I'm unable to find where it is. Getting kind of desperate; if I can just divide it by 2 somewhere in the kernel sources, I'll gladly do so.

    12-12-2016 Update: I've given up on fixing this, but something caught my attention and reminded me of this problem. I have an old failing HDD at home, and it wastes resources like crazy when I try to do anything with it.

    Is it possible this was just a case of a failing HDD? The HDD in question died within a month of when my problem occurred. If that's the case, I've got my answer.
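
    Putting the workarounds above together, roughly what this amounts to (a sketch; the dirty_bytes values are illustrative, not tuned):

    # flush dirty data first so clean pages can actually be reclaimed,
    # then drop page cache, dentries and inodes
    sync
    echo 3 > /proc/sys/vm/drop_caches

    # keep less dirty data in flight so writeback starts earlier
    sysctl -w vm.dirty_bytes=$((16*1024*1024))
    sysctl -w vm.dirty_background_bytes=$((4*1024*1024))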

    • David Schwartz
      David Schwartz almost 8 years
      What is the actual problem you're trying to solve? Why would you prefer to have memory wasted rather than being used to cache data that might be needed later?
    • RoughTomato
      RoughTomato almost 8 years
      I'm running a certain program on an embedded platform while the system performs different operations in the background. Say my platform has 256 MB and the program uses 100 MB; I'm in danger of running OOM. I thought there was some mechanism that doesn't allow allocating new buffers if there's a chance of running OOM. Recent system failures convinced me that such a mechanism either doesn't exist or isn't turned on.
    • RoughTomato
      RoughTomato almost 8 years
      If I can limit memory usage to, say, 10 minutes and 50-60% less memory, I'd be satisfied. direct_io takes 20 minutes with 80-90% less memory. Unfortunately I can't spare 20 minutes for those operations; 10 is more acceptable in my case.
    • David Schwartz
      David Schwartz almost 8 years
      It sounds like your question is based on misconceptions about how memory works on modern operating systems. What you're asking how to do is what the OS already does -- when it needs memory for other things, it reduces the buffer cache size.
    • RoughTomato
      RoughTomato almost 8 years
      I do realize that, but my OOMs caused by the buffer cache during I/O operations suggest that either it doesn't work or something is broken. It might be the latter, considering I'm running a custom kernel that isn't up to date with recent changes, on an embedded device. I thought there might be something like direct_io, but with a buffer. But that's probably because I don't really understand what's going on in the kernel during those operations, even though I've read the sources.
    • RoughTomato
      RoughTomato almost 8 years
      Another possibility is that I don't understand memory allocation very well, since until recently the only thing I had to care about was freeing my mallocs properly. Now I've got a problem I don't understand very well: during creation of a filesystem on an HDD, the kernel allocates loads of memory, and if the drive is big enough it causes OOM. Dropping buffers helps if I do it often enough, so naturally my thought was: "Hey, can I limit it somehow so it won't kill my device?" But after reading the kernel sources, I'm afraid the problem might be beyond my ability to fix.
    • David Schwartz
      David Schwartz almost 8 years
      Tell us more about the type of I/O operations you're doing. Are they writes to a hard drive? The solution is likely to minimize the amount of unwritten data waiting to be written by throttling the writing process as needed. A quick fix might be to limit the number of dirty pages. See here.
    • RoughTomato
      RoughTomato almost 8 years
      Writing inodes to the drive, during creation of an ext4 filesystem. I've been on that website and tried that, with no positive effect. Configuring the VM to limit dirty pages and drop them as soon as possible didn't achieve any change at all (which seems really strange to me, since going the other way causes more memory usage and OOM with smaller drives).
    • admirabilis
      admirabilis over 5 years
      This was unfairly downvoted. Most users blindly assume that buffer caches using 100% of the RAM is always a good thing. To anyone having the issue where the system becomes completely unresponsive because freeing the caches for other applications takes a frustrating amount of time, I recommend the first answer on unix.stackexchange.com/questions/253816 and staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/… . There were a couple patches proposed over the years, but none of them seem to have ever made it upstream.
    • RoughTomato
      RoughTomato over 5 years
      Thank you! Shame I no longer have a means of reproducing this issue, but I'm saving this for the future. Chances are I'll run into it again at some point in life.
  • RoughTomato
    RoughTomato about 6 years
    I think I tried that as well (but that was ages ago, so my memory is a bit hazy). I think the kernel was somehow ignoring my cgroups setup, although I no longer have a means of recreating the problem to confirm it. The device in question is no longer in my possession. Thanks anyway :)
  • Peng Qu
    Peng Qu over 2 years
    I think it won't help, because the page cache is managed by the OS, not by one process. cgroups can only limit a process's page usage, which doesn't include cached disk pages.
  • Peng Qu
    Peng Qu over 2 years
    Sorry, I made a mistake: memory.limit_in_bytes can't do this, but memory.memsw.limit_in_bytes does.
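
    For completeness, a minimal sketch of that variant, assuming cgroup v1 with swap accounting enabled (boot with swapaccount=1); memory.memsw.limit_in_bytes caps memory plus swap together and must be set at or above memory.limit_in_bytes:

    # memory.limit_in_bytes must be set first and must not exceed the memsw limit
    cgset -r memory.limit_in_bytes=$((50*1024*1024*1024)) group1
    cgset -r memory.memsw.limit_in_bytes=$((50*1024*1024*1024)) group1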