Why can't I crash my system with a fork bomb?
Solution 1
You probably have a Linux distro that uses systemd.
Systemd creates a cgroup for each user, and all processes of a user belong to the same cgroup.
Cgroups are a Linux mechanism for setting limits on system resources such as the maximum number of processes, CPU cycles, RAM usage, and so on. This is a different, more modern layer of resource limiting than ulimit (which uses the getrlimit()/setrlimit() syscalls).
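For comparison, the older ulimit layer can be inspected from any shell; a minimal sketch using the standard bash built-in (backed by those rlimit syscalls):

```shell
# Soft and hard per-user process limits from the classic rlimit layer.
# The soft limit is what a fork bomb hits first; a non-root user can
# raise it only up to the hard limit.
soft=$(ulimit -Su)   # soft limit on user processes (may print "unlimited")
hard=$(ulimit -Hu)   # hard limit
echo "nproc soft=$soft hard=$hard"
```

Note that these limits are per-user, while the systemd TasksMax limit described below applies to the user's whole slice.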
If you run systemctl status user-<uid>.slice (which represents the user's cgroup), you can see the current and maximum number of tasks (processes and threads) allowed within that cgroup.
$ systemctl status user-$UID.slice
● user-22001.slice - User Slice of UID 22001
   Loaded: loaded
  Drop-In: /usr/lib/systemd/system/user-.slice.d
           └─10-defaults.conf
   Active: active since Mon 2018-09-10 17:36:35 EEST; 1 weeks 3 days ago
    Tasks: 17 (limit: 10267)
   Memory: 616.7M
By default, the maximum number of tasks that systemd allows for each user is 33% of the system-wide maximum (sysctl kernel.threads-max); this usually amounts to roughly 10,000 tasks. If you want to change this limit:
- In systemd v239 and later, the user default is set via TasksMax= in /usr/lib/systemd/system/user-.slice.d/10-defaults.conf. To adjust the limit for a specific user (applied immediately as well as stored in /etc/systemd/system.control), run:
  systemctl [--runtime] set-property user-<uid>.slice TasksMax=<value>
  The usual mechanisms for overriding a unit's settings (such as systemctl edit) can be used here as well, but they require a reboot. For example, to change the limit for every user, you could create /etc/systemd/system/user-.slice.d/15-limits.conf.
- In systemd v238 and earlier, the user default is set via UserTasksMax= in /etc/systemd/logind.conf. Changing the value generally requires a reboot.
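To check which TasksMax value is actually in effect for your own slice, you can query the unit property directly. A small sketch, assuming a systemd-booted system (it falls back to a placeholder elsewhere):

```shell
# Print the effective TasksMax for the current user's slice.
slice="user-$(id -u).slice"
systemctl show -p TasksMax "$slice" 2>/dev/null || echo "TasksMax=unknown (no systemd)"
```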
More info about this:
- man 5 systemd.resource-control
- man 5 systemd.slice
- man 5 logind.conf
- http://0pointer.de/blog/projects/systemd.html (search this page for cgroups)
- man 7 cgroups and https://www.kernel.org/doc/Documentation/cgroup-v1/pids.txt
- https://en.wikipedia.org/wiki/Cgroups
Solution 2
This won't crash modern Linux systems anymore anyway.
It creates hordes of processes but doesn't really burn much CPU, as the processes go idle. You now run out of slots in the process table before running out of RAM.
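Those ceilings are plain sysctls and can be read straight from /proc on any Linux system:

```shell
# System-wide ceilings a fork bomb hits before RAM runs out:
echo "pid_max:     $(cat /proc/sys/kernel/pid_max)"      # highest PID the kernel will hand out
echo "threads-max: $(cat /proc/sys/kernel/threads-max)"  # max tasks (processes + threads) overall
```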
If you're not cgroup limited as Hkoof points out, the following alteration still brings systems down:
:(){ : | :& : | :& }; :
Solution 3
Back in the 90's I accidentally unleashed one of these on myself. I had inadvertently set the execute bit on a C source file that had a fork() command in it. When I double-clicked it, csh tried to run it rather than open it in an editor like I wanted.
Even then, it didn't crash the system. Unix is robust enough that your account and/or the OS will have a process limit. What happens instead is it gets super sluggish, and anything that needs to start a process is likely to fail.
What's happening behind the scenes is that the process table fills up with processes that are trying to create new processes. If one of them terminates (either due to getting an error on the fork because the process table is full, or due to a desperate operator trying to restore sanity to their system), one of the other processes will merrily fork a new one to fill the void.
The "fork bomb" is basically an unintentionally self-repairing system of processes on a mission to keep your process table full. The only way to stop it is to somehow kill them all at once.
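One way to "kill them all at once" is to freeze the processes first: SIGSTOP cannot be caught or ignored, so stopped processes cannot fork replacements while you reap them. A hedged sketch (the function name is made up here, and it is deliberately not invoked, since running it also terminates your own shell):

```shell
# Freeze, then kill, every process the current user may signal.
# Run as the affected user, NEVER as root: PID -1 means "all signalable processes".
stop_fork_bomb() {
  kill -STOP -1   # suspend everything first so nothing can fork
  kill -KILL -1   # then kill the frozen processes
}
echo "stop_fork_bomb defined (not invoked)"
```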
Plancton
Updated on September 18, 2022

Comments
-
Plancton over 1 year
Recently I've been digging up information about processes in GNU/Linux, and I came across the infamous fork bomb:
:(){ : | :& }; :
Theoretically, it is supposed to duplicate itself endlessly until the system runs out of resources...
However, I've tried testing it on both a CLI Debian and a GUI Mint distro, and it doesn't seem to impact the system much. Yes, tons of processes are created, and after a while I see console messages like:
bash: fork: Resource temporarily unavailable
bash: fork: retry: No child processes
But after some time, all the processes just get killed and everything goes back to normal. I've read that ulimit sets a maximum number of processes per user, but I can't seem to raise it very far.
What are the system protections against a fork-bomb? Why doesn't it replicate itself until everything freezes or at least lags a lot? Is there a way to really crash a system with a fork bomb?
-
Hugo over 5 yearsNote that you won’t “crash” your system using a fork bomb... as you said, you’ll exhaust resources and be unable to spawn new processes but the system shouldn’t crash
-
mtraceur over 5 yearsWhat happens if you run
:(){ :& :; }; :
instead? Do they also all end up getting killed eventually? What about :(){ while :& do :& done; }; : ?
-
ron over 2 years
ulimit -u unlimited would be the command-line method to set max user processes to unlimited; however, I believe that would be overridden by the hard limit in /etc/security/limits.conf
-
ron over 2 yearsso if you were to edit /etc/security/limits.conf and set both the hard and soft limits to unlimited for nproc, I believe that would undo the protection mechanism and allow your fork bomb to blow up (i.e. really crash) your system.
-
Austin Hemmelgarn over 5 yearsThis really depends on what you consider 'crashing' the system. Running out of slots in the process table will bring a system to its knees in most cases, even if it doesn't cause a full kernel panic.
-
mtraceur over 5 yearsWhy would the processes go "idle"? Each forked process is in an infinite recursion of creating more processes. So it spends a lot of time in system call overhead (
fork
over and over), and the rest of its time doing the function call (incrementally using more memory for each call in the shell's call stack, presumably). -
Joshua over 5 years@mtraceur: It only happens when forking starts failing.
-
mtraceur over 5 yearsOh, I take it back. I was modeling the logic of a slightly different fork bomb implementation in my head (like this:
:(){ :& :; }; :
) instead of the one in the question. I haven't actually fully thought through the execution flow of the archetypical one as given. -
Austin Hemmelgarn over 5 years@Joshua Except, last I checked, Linux doesn't.
-
rackandboneman over 5 yearsOK, one way to build a more effective one: Make a fork bomb of processes that try to do something to a file on a hung hard NFS mount. Start it, let all these processes get D-stated. Fix the cause of the nfs mount being hung. Run.
-
Joshua over 5 years@rackandboneman: Yeah kinda. Anyway all I gotta do to fix this one good is ::(){ while :; do; ::&; done }; :: I've heard horror stories about this kind of thing managing to survive a bot trying to kill it from several priorities higher.
-
rackandboneman over 5 yearsI remember the nfs thing so vividly because it once ended me up with a load average around 900 on a single core (2.2.x or 2.4.x kernel, not sure) system....
-
Mast over 5 yearsAnd 12288 processes (minus what was already spawned before the bomb) doing nothing except trying to create a new one, doesn't really impact a modern system.
-
Aaron over 5 yearsFor what it's worth, I tried it under Cygwin on Windows 10 and it did bring my system to its knees; I had to do a hard shutdown to regain control of my system.
-
Score_Under over 5 yearsKilling them all at once is easier than you think - SIGSTOP them all first.
-
T.E.D. over 5 years@Score_Under - I hope you'll forgive me if I don't immediately rush off to my nearest Harris Nighthawk to see if that would have fixed the problem there. I'm thinking just getting a PID and sending it the signal before it dies from the failed fork and another takes its place might be a challenge, but I'd have to try it out.
-
Andreas Krey over 5 years@T.E.D. kill -9 -1 may be your friend here (run as the same user as the fork bomb; not as root).
-
T.E.D. over 5 years@AndreasKrey - That flag doesn't look familiar, so I'm doubting my 90's era Nighthawk had it.
-
Joshua over 5 years@Aaron: Ah; in Cygwin it's a memory bomb because
fork()
copies all the process memory immediately and bash is kinda big. -
Joshua about 5 years@T.E.D.:
-1 isn't a flag. kill only takes one option, then stops parsing options. This kills process ID -1, which is an alias for all processes.