Given two background commands, terminate the remaining one when either exits
Solution 1
This starts both processes, waits for the first one that finishes and then kills the other:
#!/bin/bash
{ cd ./frontend && gulp serve; } &
{ cd ./backend && gulp serve --verbose; } &
wait -n
pkill -P $$
How it works
- Start:
  { cd ./frontend && gulp serve; } &
  { cd ./backend && gulp serve --verbose; } &
  These two commands start both processes in the background.
- Wait:
  wait -n
  This waits for either background job to terminate. Because of the -n option, this requires bash 4.3 or better.
- Kill:
  pkill -P $$
  This kills any job for which the current process is the parent; in other words, it kills any background process that is still running. If your system does not have pkill, try replacing this line with:
  kill 0
  which kills the current process group instead.
Easily testable example
By changing the script, we can test it even without gulp installed:
$ cat script.sh
#!/bin/bash
{ sleep $1; echo one; } &
{ sleep $2; echo two; } &
wait -n
pkill -P $$
echo done
The above script can be run as bash script.sh 1 3
and the first process terminates first. Alternatively, one can run it as bash script.sh 3 1
and the second process will terminate first. In either case, one can see that this operates as desired.
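If your shell's wait lacks the -n option (bash before 4.3, or non-bash shells), a polling loop can approximate the same behavior. This is only a sketch, using the same sleep/echo stand-ins as the testable example above; it assumes kill -0 and pkill are available, and a sleep that accepts fractional seconds (GNU sleep does):

```shell
#!/bin/bash
# Polling fallback for shells where `wait -n` is unavailable.
{ sleep 1; echo one; } &
pid1=$!
{ sleep 3; echo two; } &
pid2=$!
# kill -0 sends no signal; it only tests whether the process still exists.
while kill -0 "$pid1" 2>/dev/null && kill -0 "$pid2" 2>/dev/null; do
    sleep 0.2
done
pkill -P $$ 2>/dev/null   # kill whichever job is still running
wait 2>/dev/null          # reap the children
echo done
```

Run with the arguments above, this prints "one" and then "done"; the second job is killed before it can print "two".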
Solution 2
For completeness, here is what I ended up using:
#!/bin/bash
(cd frontend && gulp serve) &
(cd backend && gulp serve --verbose) &
wait -n
kill 0
This works for me on Git for Windows 2.5.3 64-bit. Older versions may not accept the -n option on wait.
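If you want the script to fail loudly on an older bash rather than die with an "invalid option" error, you can check BASH_VERSINFO before relying on wait -n. A minimal sketch (the final echo stands in for the real work):

```shell
#!/bin/bash
# Fail loudly if this bash is too old for `wait -n` (needs 4.3+).
if (( BASH_VERSINFO[0] < 4 || (BASH_VERSINFO[0] == 4 && BASH_VERSINFO[1] < 3) )); then
    echo "this script needs bash 4.3+ for 'wait -n'" >&2
    exit 1
fi
echo "wait -n is available"
```

On a qualifying shell this prints the confirmation; on an older bash (for example the 3.2 shipped with macOS) it exits with a clear message instead.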
Solution 3
This is tricky. Here’s what I devised; it may be possible to simplify/streamline it:
#!/bin/bash
pid1file=$(mktemp)
pid2file=$(mktemp)
stat1file=$(mktemp)
stat2file=$(mktemp)
while true; do sleep 42; done &
main_sleeper=$!
(cd frontend && gulp serve & echo "$!" > "$pid1file";
wait "$!" 2> /dev/null; echo "$?" > "$stat1file"; kill "$main_sleeper" 2> /dev/null) &
(cd backend && gulp serve --verbose & echo "$!" > "$pid2file";
wait "$!" 2> /dev/null; echo "$?" > "$stat2file"; kill "$main_sleeper" 2> /dev/null) &
sleep 1
wait "$main_sleeper" 2> /dev/null
if stat1=$(<"$stat1file") && [ "$stat1" != "" ] && [ "$stat1" != 0 ]
then
echo "First process failed ..."
if pid2=$(<"$pid2file") && [ "$pid2" != "" ]
then
echo "... killing second process."
kill "$pid2" 2> /dev/null
fi
fi
if [ "$stat1" = "" ] && \
stat2=$(<"$stat2file") && [ "$stat2" != "" ] && [ "$stat2" != 0 ]
then
echo "Second process failed ..."
if pid1=$(<"$pid1file") && [ "$pid1" != "" ]
then
echo "... killing first process."
kill "$pid1" 2> /dev/null
fi
fi
wait
if stat1=$(<"$stat1file")
then
echo "Process 1 terminated with status $stat1."
else
echo "Problem getting status of process 1."
fi
if stat2=$(<"$stat2file")
then
echo "Process 2 terminated with status $stat2."
else
echo "Problem getting status of process 2."
fi
- First, start a process (while true; do sleep 42; done &) that sleeps/pauses forever. If you're sure that your two commands will terminate within a certain amount of time (e.g., an hour), you can change this to a single sleep that will exceed that (e.g., sleep 3600). You could then change the following logic to use this as a timeout; i.e., kill the processes if they're still running after that much time. (Note that the above script currently does not do that.)
- Start the two asynchronous (concurrent background) processes.
- You don't need ./ for cd.
- command & echo "$!" > somewhere; wait "$!" is a tricky construct that starts a process asynchronously, captures its PID, and then waits for it, making it sort of a foreground (synchronous) process. But this happens within a (…) list which is in the background in its entirety, so the gulp processes do run asynchronously.
- After either of the gulp processes exits, write its status to a temporary file and kill the "forever sleep" process.
- The sleep 1 protects against a race condition where the first background process dies before the second one gets a chance to write its PID to the file.
- Wait for the "forever sleep" process to terminate. This happens after either of the gulp processes exits, as stated above.
- See which background process terminated. If it failed, kill the other.
- If one process failed and we killed the other, wait for the second one to wrap up and save its status to a file. If the first process finished successfully, wait for the second one to finish.
- Check the statuses of the two processes.
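The capture-PID-then-wait construct described above can be demonstrated in isolation. In this sketch, false stands in for a gulp process that errors out, and the temporary files play the same role as in the script:

```shell
#!/bin/sh
# Demo of the "start, capture PID, wait, record status" construct:
# the inner command runs in the background, its PID is saved, and the
# enclosing subshell then waits on it and records the exit status.
pidfile=$(mktemp)
statfile=$(mktemp)
( false & echo "$!" > "$pidfile"; wait "$!"; echo "$?" > "$statfile" ) &
wait
echo "pid was $(cat "$pidfile"), status was $(cat "$statfile")"
rm -f "$pidfile" "$statfile"
```

Since false exits with status 1, the script reports "status was 1".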
Solution 4
On my system (CentOS), wait doesn't have -n, so I did this:
{ sleep 3; echo one; } &
FOO=$!
{ sleep 6; echo two; } &
wait $FOO
pkill -P $$
This doesn't wait for "either" process; rather, it waits for the first one. But it can still help if you know which server will stop first.
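A side benefit of waiting on a specific PID is that wait also returns that job's exit status. A sketch with sleep/exit stand-ins for the two servers:

```shell
#!/bin/bash
# Waiting on a specific PID also yields that job's exit status.
{ sleep 1; exit 7; } &
FOO=$!
{ sleep 3; echo two; } &
wait "$FOO"               # returns the first job's exit status
status=$?
pkill -P $$ 2>/dev/null   # stop the job that is still running
echo "first job exited with status $status"
```

Here the first job exits with status 7, which wait "$FOO" passes through; the second job is killed before it can print.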
Criminal_Jelly
Updated on September 18, 2022

Comments

- Criminal_Jelly almost 2 years: I have a simple bash script that starts two servers:
  #!/bin/bash
  (cd ./frontend && gulp serve) &
  (cd ./backend && gulp serve --verbose)
  If the second command exits, it seems that the first command continues running. How can I change this so that if either command exits, the other is terminated? Note that we don't need to check the error levels of the background processes, just whether they have exited.
- Admin almost 9 years: Why not gulp ./frontend/serve && gulp ./backend/serve --verbose?
- Admin almost 9 years: serve is an argument, not a file, so the current directory needs to be set.
- Admin almost 9 years: Also, these are long-running processes that need to run concurrently; sorry if that was not clear.
- Criminal_Jelly almost 9 years: This looks great. Unfortunately those commands don't work on my bash environment (msysgit). That's my bad for not specifying that. I will try it out on a real Linux box though.
- G-Man Says 'Reinstate Monica' almost 9 years: (1) Not all versions of bash support the -n option to the wait command. (2) I agree 100% with the first sentence: your solution starts two processes, waits for the first one to finish and then kills the other. But the question says "…if either command errors out, the other is terminated?" I believe that your solution is not what the OP wants. (3) Why did you change (…) & to { …; } &? The & forces the list (group command) to run in a subshell anyway. IMHO, you've added characters and possibly introduced confusion (I had to look at it twice to understand it) with no benefit.
- John1024 almost 9 years: Yes, wait -n requires bash 4.3 or better. My understanding is that gulp is a web server and, in practice, it terminates itself only when it errors out. If the OP's expectations are different, he can clarify.
- Criminal_Jelly almost 9 years: John is correct; they are web servers that should normally both keep running unless an error occurs or they are signaled to terminate. So I don't think we need to check the error level of each process, just whether it is still running.
- Criminal_Jelly almost 9 years: pkill is not available for me but kill 0 seems to have the same effect. Also, I updated my Git for Windows environment and it looks like wait -n works now, so I'm accepting this answer.
- John1024 almost 9 years: Interesting. Although I see that that behavior is documented for some versions of kill, the documentation on my system does not mention it. However, kill 0 works anyway. Good find!
- Kusalananda about 6 years: Whether wait has the -n option or not depends on the shell you are using, not the Linux distribution.