How to limit the total resources (memory) of a process and its children
Solution 1
I am not sure if this answers your question, but I found this Perl script that claims to do exactly what you are looking for. It enforces the limits itself by periodically waking up and checking the resource usage of the process and its children. It seems to be well documented and explained, and has been updated recently.
As slm said in his comment, cgroups can also be used for this. You might have to install the utilities for managing cgroups; assuming you are on Linux, look for libcgroup.
sudo cgcreate -t $USER:$USER -a $USER:$USER -g memory:myGroup
Make sure $USER is your user. Your user should then have access to the cgroup memory settings in /sys/fs/cgroup/memory/myGroup.
You can then set the limit to, let's say, 500 MB like this:
echo 500000000 > /sys/fs/cgroup/memory/myGroup/memory.limit_in_bytes
Now let's run Vim:
cgexec -g memory:myGroup vim
The vim process and all of its children should now be limited to 500 MB of RAM. However, this limit appears to apply only to RAM and not to swap: once the processes reach the limit they will start swapping, and I could not find a way to limit swap usage using cgroups alone.
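If swap accounting is compiled into your kernel (CONFIG_MEMCG_SWAP, often also requiring swapaccount=1 on the kernel command line), cgroup v1 additionally exposes memory.memsw.limit_in_bytes, which caps RAM plus swap together. A hedged sketch reusing the myGroup hierarchy from above; it needs root, and the memsw file only exists when swap accounting is enabled:

```shell
CG=/sys/fs/cgroup/memory/myGroup
LIMIT=$((500 * 1000 * 1000))                 # same 500 MB as above
if [ -w "$CG/memory.memsw.limit_in_bytes" ]; then
    # memsw must be >= the plain memory limit, so set both to the cap:
    echo "$LIMIT" > "$CG/memory.limit_in_bytes"
    echo "$LIMIT" > "$CG/memory.memsw.limit_in_bytes"
else
    echo "swap accounting not available (CONFIG_MEMCG_SWAP / swapaccount=1)"
fi
```

With both files set to the same value, the group cannot escape the cap by swapping; it is OOM-killed instead.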
Solution 2
https://unix.stackexchange.com/a/536046/4319:
On any systemd-based distro you can also use cgroups indirectly through systemd-run. E.g., for your case of limiting pdftoppm to 500M of RAM, use:
systemd-run --scope -p MemoryLimit=500M pdftoppm
...
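On cgroup-v2 distros the current property name is MemoryMax (MemoryLimit is the older, v1-era name), and MemorySwapMax can additionally forbid swapping past the cap. A sketch, per systemd.resource-control(5); the dry-run wrapper only prints the command so the snippet is safe to paste anywhere, and on a real system you would drop it and run the command directly:

```shell
# MemoryMax is the cgroup v2 hard limit; MemorySwapMax=0 forbids swap,
# so the unit is OOM-killed instead of swapping past the cap.
run() { echo "+ $*"; }   # dry-run stand-in; replace with direct execution
run systemd-run --scope -p MemoryMax=500M -p MemorySwapMax=0 pdftoppm
```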
Solution 3
I created a script that does this, using commands from cgroup-tools to run the target process in a cgroup with limited memory. See this answer for details and the script.
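That script is not reproduced here, but a minimal wrapper in the same spirit might look like the following. The limitmem name is illustrative, not the actual script from the linked answer; it assumes cgroup v1 with cgroup-tools installed and sufficient privileges:

```shell
# limitmem <bytes> <command> [args...] -- hypothetical sketch of a
# cgroup-tools wrapper that caps a whole process tree's memory.
limitmem() {
    bytes=$1; shift
    group="limitmem_$$"                      # throwaway per-run group
    cgcreate -g "memory:$group" || return 1  # needs root or delegation
    echo "$bytes" > "/sys/fs/cgroup/memory/$group/memory.limit_in_bytes"
    cgexec -g "memory:$group" "$@"           # command and all children
    status=$?                                # share the same cap
    cgdelete -g "memory:$group"              # remove the empty group
    return $status
}

# Example invocation (requires root, so shown as a comment):
# limitmem $((500 * 1000 * 1000)) vim
```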
jpe
Updated on September 18, 2022
Comments
-
jpe almost 2 years
There are plenty of questions and answers about constraining the resources of a single process. For example, RLIMIT_AS can be used to constrain the maximum memory allocated by a process, which shows up as VIRT in tools like top. More on the topic e.g. here: Is there a way to limit the amount of memory a particular process can use in Unix? The setrlimit(2) documentation says: "A child process created via fork(2) inherits its parent's resource limits. Resource limits are preserved across execve(2)."
It should be understood in the following way:
If a process has a RLIMIT_AS of e.g. 2GB, then it cannot allocate more memory than 2GB. When it spawns a child, the address space limit of 2GB will be passed on to the child, but counting starts from 0. The 2 processes together can take up to 4GB of memory.
But what would be the useful way to constrain the sum total of memory allocated by a whole tree of processes?
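The per-process nature of RLIMIT_AS is easy to see from a shell, where ulimit -v sets the address-space limit (in KiB) for the shell and everything it spawns. A small sketch; the limit is set inside a subshell so the calling shell is unaffected:

```shell
# RLIMIT_AS is per-process: the subshell sets a 2 GiB address-space cap
# (ulimit -v takes KiB) and every child inherits the *limit*, but each
# child's own allocations are counted against it from zero.
(
    ulimit -v $((2 * 1024 * 1024))   # cap this shell and its descendants
    ulimit -v                        # prints 2097152
    ( ulimit -v )                    # child reports the same inherited cap
)
```

So two such processes together can still consume up to 4 GB, which is exactly the problem described above.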
-
slm about 10 yearsRelated: unix.stackexchange.com/questions/1424/…
-
slm about 10 yearsI'd take a look at cgroups.
-
jpe about 10 years@slm Thanks! cgroups sound like something to try. The only approach so far that might work (besides an ugly hack of summing memory with ps and killing the parent process if it exceeds the limit) is using some form of container (LXC or the like).
-
slm about 10 yearsYeah - the tools I'm aware of only handle single processes, not a group, but given how cgroups underpin VM technologies like LXC and Docker, I'd expect them to do what you want.
-
Gilles 'SO- stop being evil' about 10 yearsUnder which Unix variant?
-
jpe about 10 years@Gilles It would be good to know how to do it in Linux (the environment where I encountered the problem), but answers for OpenSolaris/Illumos, OSX, BSD are welcome too (e.g. in (Open)Solaris/Illumos it should be easy, right?).
-
Gilles 'SO- stop being evil' about 10 years@jpe Given that different unix variants are likely to do this in very different ways, it would be better to have one question per variant.
-
jpe about 10 years@Gilles OK, let the current question be about Linux as the man page excerpt is from Linux.
-
mikeserv about 10 yearsIf it's Linux put the parent PID in its own namespace and control it and all its children that way. Here's an introductory answer to that concept: unix.stackexchange.com/a/124194/52934
-
jpe about 10 years@mikeserv looks like something in the right direction too. But which would be the way that would work in most up-to-date distributions, cgroups or containers/namespaces?
-
mikeserv about 10 yearsnamespaces are containers - just native and handled fully in kernel. And much of the control in control groups is what makes that possible. namespaces finally rolled out production ready circa kernel 3.8. If that last was a small intro - here's the inside out: lwn.net/Articles/531114
-
jpe about 10 years@mikeserv It seems the chat is converging to something constructive: namespaces are a solution, and probably the solution. What remains to be said is how to use them in a user-friendly way that works across most distros with a recent enough kernel.
-
mikeserv about 10 yearsI completely agree - but I doubt very seriously if I can help you much more - I don't have any practical experience with them. I'm kind of hoping you'll dig into that 7 part series at Linux Weekly News and share your own... That's why - for my part at least - this chat is in the comments block of the question and not an answer...
-
Tasos about 10 yearsWhat you are trying to achieve may be impossible and dangerous, because you may kill or crash the process tree anyway once you run out of your 2 GB allocation. That's why a spawned process is a copy of the parent process.
-
jpe about 10 yearsThe proposed solution does make it possible to limit the resident set size of a tree of processes. The behaviour seems to be different from RLIMIT_AS: it is possible to malloc more memory than the limit, but it seems not to be possible to actually use more.
-
Søren Løvborg over 9 yearsBy default, the cgroup memory limit applies only to (approximately) the physical RAM use. There's a kernel option (CONFIG_MEMCG_SWAP) to enable swap accounting; see the kernel docs for details.
-
jozxyqk over 9 yearsOn Fedora,
sudo yum install libcgroup-tools
-
Mikko Rantalainen almost 3 yearsAccording to Poettering (the creator of systemd), you should not run cgmanager on a system that's running with systemd (that is, any modern Linux distro). Your distro is supposed to use cgroupv2, and you can run systemd-run --user -p MemoryMax=42M ... – however, if your system is not cgroupv2 compatible, that command will appear to work but the memory usage is not actually limited in practice.
-
Admin about 2 yearsNote that if your OS is running systemd (pretty much all Linux distros these days), you're not supposed to use cgmanager nor cgcreate, as far as I know. I think the officially supported systemd way is to use systemd-run --scope -p MemoryLimit=500M ..., but it has been buggy in many distros, so make sure to test whether it actually works on yours. In my experience, some versions will silently fail: they will run the command but will not limit the memory usage.
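Since the comments above hinge on whether the distro runs the unified cgroup v2 hierarchy, here is a quick check. A sketch: is_cgroup2 is an illustrative helper, and stat -fc %T is GNU coreutils (it reports cgroup2fs when v2 is mounted at /sys/fs/cgroup):

```shell
# Decide, from the filesystem type string, whether the unified cgroup v2
# hierarchy is mounted at /sys/fs/cgroup.
is_cgroup2() { [ "$1" = "cgroup2fs" ]; }

fstype=$(stat -fc %T /sys/fs/cgroup 2>/dev/null || echo unknown)
if is_cgroup2 "$fstype"; then
    echo "cgroup v2: systemd-run -p MemoryMax=... should enforce limits"
else
    echo "cgroup v1 or hybrid: verify the limit is actually applied"
fi
```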