Bash: running multiple programs and handling close
Solution 1
Collect the process IDs, and kill the background processes on exit.
```bash
#!/bin/bash
killbg() {
    for p in "${pids[@]}" ; do
        kill "$p"
    done
}
trap killbg EXIT
pids=()
background job 1 &
pids+=($!)
background job 2... &
pids+=($!)
foreground job
```
Trapping `EXIT` runs the function when the shell exits, regardless of the reason. You could change that to `trap killbg SIGINT` to only run it on Ctrl+C.
This doesn't check if one of the background processes exited before the script tries to shoot them. If they do, you could get errors, or worse, shoot the wrong process.
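One way to soften that (a sketch of mine on top of the answer, not part of it) is to probe each PID with `kill -0` and silence errors for jobs that have already exited. Note `kill -0` sends no signal, it only tests whether the PID is currently signalable, so this narrows the race window but still cannot fully rule out PID reuse:

```bash
#!/bin/bash
killbg() {
    for p in "${pids[@]}"; do
        # kill -0 tests for existence without sending a signal.
        if kill -0 "$p" 2>/dev/null; then
            kill "$p" 2>/dev/null
        fi
    done
}
trap killbg EXIT
pids=()
sleep 999 &
pids+=($!)
sleep 0.1 &      # this one exits long before the trap fires
pids+=($!)
sleep 1          # by the time killbg runs, the second job is skipped
```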
Or kill them by job ID. Let's read the output of `jobs` to find out which ones are still active.
```bash
#!/bin/bash
killjobs() {
    for x in $(jobs | awk -F '[][]' '{print $2}') ; do
        kill %$x
    done
}
trap killjobs EXIT
sleep 999 &
sleep 1 &
sleep 999 &
sleep 30
```
If you run background processes that spawn other processes (like a subshell: `(sleep 1234 ; echo foo) &`), you need to enable job control with `set -m` ("monitor mode") for this to work. Otherwise just the lead process is terminated.
Solution 2
I was just reading similar questions about collecting PIDs and then killing them all at the end of a script. The problem is that a PID for a finished process can get recycled and reused in a new process before your script finishes, and then you could kill a new (random) process.
Trap EXIT and kill with bash's job control
You could use bash's job control to kill only the processes started in the script, with a trap and `%n` jobspecs, counting up to the maximum number of jobs that could be running (only 3 in this example):
```bash
#!/bin/bash
#trap 'kill %1 %2 %3' 0     # 0 and EXIT are equivalent
#trap 'kill %1 %2 %3' EXIT  # or use {1..n} as below
trap 'kill %{1..3}' EXIT
sleep 33 &
sleep 33 &
sleep 33 &
echo processes are running, ctrl-c the next sleep
sleep 66
echo this line will never be executed
```
Any extra "kills" to jobspecs that have already finished only result in an error message; they won't kill any other new/random processes.
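The `%{1..3}` works because brace expansion happens before `kill` runs. A variant (my addition, not from the answer) that also hides the harmless "no such job" errors:

```bash
#!/bin/bash
# Brace expansion turns %{1..3} into three jobspecs before kill runs:
echo kill %{1..3}            # prints: kill %1 %2 %3
# Same trap as above, but already-finished jobs only produce hidden noise:
trap 'kill %{1..3} 2>/dev/null' EXIT
sleep 33 &
sleep 0.1 &                  # finishes long before the trap fires
sleep 33 &
sleep 1
```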
Kill your script's complete process group
Here's a slightly different way to kill your script's complete process group. But if your script/shell's job control isn't set up, it could inherit its process group from its parent... though without job control the approach above wouldn't work either.
The difference is this `kill` command for the trap, using bash's PID, since it becomes the PGID for new processes:

```bash
trap 'kill -- -$$' EXIT
```
See this related Q, or here, where Johannes 'fish' Ziemke traps SIGINT and SIGTERM and uses `setsid` to kill the complete process group "in a new process group so we won't risk killing ourselves."
Solution 3
If you want to kill all background processes if they do not complete before `lastProgram` has finished executing, you will need to store all of the PIDs:
```bash
python program1.py &
p1pid=$!
python program2.py &
p2pid=$!
other programs ... &
p3pid=$!
lastProgram
kill $p1pid $p2pid $p3pid
```
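With `sleep`s standing in for the Python programs (placeholders of mine, just to make the pattern runnable as-is):

```bash
#!/bin/bash
sleep 999 &                  # stands in for: python program1.py
p1pid=$!
sleep 999 &                  # stands in for: python program2.py
p2pid=$!
sleep 1                      # stands in for: lastProgram
kill "$p1pid" "$p2pid"
```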
If you simply want to wait until all background processes have finished executing before exiting the script, you can use the `wait` command.
```bash
python program1.py &
python program2.py &
other programs ... &
lastProgram
wait
```
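`wait` with no arguments blocks until every background child has exited, and the jobs keep running concurrently in the meantime; a minimal sketch:

```bash
#!/bin/bash
# Both sleeps run in parallel; wait returns once both are done,
# so the script takes about 1 second rather than 2.
sleep 1 &
sleep 1 &
wait
echo "all background jobs finished"
```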
Solution 4
Here is my one-line command (in bash):

```bash
trap "jobs -p | xargs kill ; trap - INT" INT ; cmd1 & cmd2 & cmd3 &
# replace the cmds with your commands
```
It traps Ctrl+C to kill all background jobs and revert the trap back to its default behaviour, then runs all the commands in the background. When you hit Ctrl+C, all of them are killed and the trap is reset.
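The same idea in script form (my rearrangement; the `sleep`s are placeholders, and `xargs -r` assumes GNU xargs so `kill` isn't run when no jobs remain). A trailing `wait` keeps the shell alive until the jobs finish or Ctrl+C arrives:

```bash
#!/bin/bash
trap 'jobs -p | xargs -r kill; trap - INT' INT
sleep 999 &                  # cmd1
sleep 999 &                  # cmd2
wait                         # Ctrl+C fires the trap, killing both jobs
```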
Updated on September 18, 2022

Comments
- Ricard Molins (almost 2 years ago): I have a bash script that runs multiple programs:

  ```bash
  #!/bin/sh
  python program1.py &
  python program2.py &
  other programs ... &
  lastProgram
  ```

  I run it as `./myscript.sh`. When I hit Ctrl+C to close `lastProgram`, it exits and all the other programs keep running in the background. The problem is that the other programs need to be terminated. What is the proper way to handle the closing of all the programs started from the script?

- Atul Vekariya (about 7 years ago): You run the programs in the background, so you should handle the PIDs. You should also handle Ctrl+C and terminate the PIDs you collected before.

- ilkkachu (about 7 years ago): Good point about avoiding shooting recycled PIDs. I thought about killing by job ID, but didn't come up with a good way to collect them.

- Govind (about 7 years ago): @ilkkachu I did find another way to kill the script's whole process group; it doesn't feel as "safe" as the jobspec method, though. I considered parsing `jobs` too, then realized it doesn't matter if you try to kill already-finished jobs, aside from an error message that can be `2>/dev/null`'d, if it's even noticed. They always start counting from one and have a maximum, so might as well just kill 'em all (shoot first, no questions ;-)

- ilkkachu (about 7 years ago): Ah, right, for a noninteractive shell (without job control) that works, though it does kill the shell itself too, which may be annoying if you want to do something else after that. `jobs` and the `%1` job IDs do work in noninteractive shells too; it just doesn't put background jobs in separate process groups.

- Ricard Molins (about 7 years ago): Your answer was extremely complete and didactic. I can now run my project after closing it without having to check if any port "bind" was left behind. Thanks for your effort.

- Ricard Molins (almost 4 years ago): If program1.py has finished, its PID may be reused by another starting program. Therefore `kill $p1pid` may kill a random process. Is this correct, or is there any mechanism that avoids this?