Redirect a COPY of stdout to a log file from within the bash script itself
Solution 1
#!/usr/bin/env bash
# Redirect stdout ( > ) into a named pipe ( >() ) running "tee"
exec > >(tee -i logfile.txt)
# Without this, only stdout would be captured - i.e. your
# log file would not contain any error messages.
# SEE (and upvote) the answer by Adam Spiers, which keeps STDERR
# as a separate stream - I did not want to steal from him by simply
# adding his answer to mine.
exec 2>&1
echo "foo"
echo "bar" >&2
Note that this is bash, not sh. If you invoke the script with sh myscript.sh, you will get an error along the lines of syntax error near unexpected token '>'.
If you are working with signal traps, you might want to use the tee -i option to avoid disruption of the output if a signal occurs. (Thanks to JamesThomasMoon1979 for the comment.)
Tools that change their output depending on whether they write to a pipe or a terminal (ls using colors and columnized output, for example) will detect the above construct as meaning that they output to a pipe. There are options to enforce the colorizing / columnizing (e.g. ls -C --color=always). Note that this will result in the color codes being written to the logfile as well, making it less readable.
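If color codes do end up in the log file, they can be stripped afterwards. This is a minimal sketch, not part of the original answer; the sed expression matches the common ESC [ ... letter form of ANSI escape sequences, and the \x1b escape assumes GNU sed:

```shell
# Remove ANSI escape sequences (color codes etc.) from a log file.
# \x1b is the ESC character; the pattern matches ESC [ params letter.
sed 's/\x1b\[[0-9;]*[A-Za-z]//g' logfile.txt > logfile.clean.txt
```

This leaves the colorized output intact on the terminal while producing a readable plain-text copy.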
Solution 2
The accepted answer does not preserve STDERR as a separate file descriptor. That means
./script.sh >/dev/null
will not output bar to the terminal, only to the logfile, and
./script.sh 2>/dev/null
will output both foo and bar to the terminal. Clearly that's not the behaviour a normal user is likely to expect. This can be fixed by using two separate tee processes both appending to the same log file:
#!/bin/bash
# See (and upvote) the comment by JamesThomasMoon1979
# explaining the use of the -i option to tee.
exec > >(tee -ia foo.log)
exec 2> >(tee -ia foo.log >&2)
echo "foo"
echo "bar" >&2
(Note that the above does not initially truncate the log file - if you want that behaviour you should add >foo.log to the top of the script.)
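A version of the script that starts with an empty log on each run might look like this (a sketch; : > foo.log simply truncates or creates the file before the redirections are set up):

```shell
#!/bin/bash
: > foo.log                     # truncate (or create) the log file first
exec > >(tee -ia foo.log)
exec 2> >(tee -ia foo.log >&2)
echo "foo"
echo "bar" >&2
```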
The POSIX.1-2008 specification of tee(1) requires that output is unbuffered, i.e. not even line-buffered, so in this case it is possible that STDOUT and STDERR could end up on the same line of foo.log; however, that could also happen on the terminal, so the log file will be a faithful reflection of what could be seen on the terminal, if not an exact mirror of it. If you want the STDOUT lines cleanly separated from the STDERR lines, consider using two log files, possibly with date stamp prefixes on each line to allow chronological reassembly later on.
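The two-log-file idea with per-line date stamps can be sketched as follows (filenames and the timestamp format are examples, not part of the original answer):

```shell
#!/usr/bin/env bash
# Prefix every line with a timestamp so the two logs can be merged
# chronologically later (e.g. with sort).
stamp() {
    while IFS= read -r line; do
        printf '%s %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$line"
    done
}
exec >  >(stamp | tee -a stdout.log)
exec 2> >(stamp | tee -a stderr.log >&2)
echo "foo"
echo "bar" >&2
```

Calling date once per line is slow for high-volume output; the ts command from moreutils is a faster alternative where available.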
Solution 3
Solution for busybox, macOS bash, and non-bash shells
The accepted answer is certainly the best choice for bash. I'm working in a Busybox environment without access to bash, and it does not understand the exec > >(tee log.txt) syntax. It also does not do exec >$PIPE properly, trying to create an ordinary file with the same name as the named pipe, which fails and hangs.
Hopefully this would be useful to someone else who doesn't have bash.
Also, for anyone using a named pipe, it is safe to rm $PIPE, because that unlinks the pipe from the VFS; the processes that use it still hold a reference to it until they are finished.
Note that unquoted $* is not safe when arguments contain whitespace; prefer "$@".
#!/bin/sh
if [ "$SELF_LOGGING" != "1" ]
then
# The parent process will enter this branch and set up logging
# Create a named pipe for logging the child's output
PIPE=tmp.fifo
mkfifo "$PIPE"
# Launch the child process with stdout redirected to the named pipe.
# "$@" (rather than unquoted $*) preserves arguments with whitespace.
SELF_LOGGING=1 sh "$0" "$@" > "$PIPE" &
# Save PID of child process
PID=$!
# Launch tee in a separate process
tee logfile < "$PIPE" &
# Unlink $PIPE because the parent process no longer needs it
rm "$PIPE"
# Wait for child process, which is running the rest of this script
wait $PID
# Return the error code from the child process
exit $?
fi
# The rest of the script goes here
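The note above that rm $PIPE is safe while the pipe is still in use can be demonstrated with a short sketch (filenames are examples):

```shell
#!/bin/sh
PIPE=demo.fifo
mkfifo "$PIPE"
cat "$PIPE" > out.txt &   # reader: open blocks until a writer appears
exec 3> "$PIPE"           # writer end; both open() calls now complete
rm "$PIPE"                # unlink the name - open descriptors keep it alive
echo hello >&3            # data still flows through the unlinked pipe
exec 3>&-                 # close the writer; cat sees EOF and exits
wait
cat out.txt               # prints "hello"
```

The opens on both ends must complete before the rm; opening a FIFO blocks until the other end is opened, so the reader and writer rendezvous before the name is removed.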
Solution 4
Inside your script file, put all of the commands within parentheses, like this:
(
echo start
ls -l
echo end
) | tee foo.log
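To capture stderr as well with this approach, redirect it into stdout before the pipe (a sketch; the extra command is only there to produce something on stderr):

```shell
(
echo start
ls -l
echo "something failed" >&2
echo end
) 2>&1 | tee foo.log
```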
Solution 5
Easy way to make a bash script log to syslog. The script output is available both through /var/log/syslog and through stderr. syslog will add useful metadata, including timestamps.
Add this line at the top:
exec &> >(logger -t myscript -s)
Alternatively, send the log to a separate file:
exec &> >(ts | tee -a /tmp/myscript.output >&2)
This requires moreutils (for the ts command, which adds timestamps).
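If moreutils is not installed, a plain shell loop can stand in for ts (a sketch; the date format mimicking syslog is an example choice):

```shell
# Fallback timestamper when the moreutils "ts" command is unavailable.
exec &> >(while IFS= read -r line; do
              printf '%s %s\n' "$(date '+%b %d %H:%M:%S')" "$line"
          done | tee -a /tmp/myscript.output >&2)
```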
Updated on August 10, 2020
Comments
-
Vitaly Kushner over 3 years
I know how to redirect stdout to a file:
exec > foo.log
echo test
this will put the 'test' into the foo.log file.
Now I want to redirect the output into the log file AND keep it on stdout.
I.e. it can be done trivially from outside the script:
script | tee foo.log
but I want to declare it within the script itself.
I tried
exec | tee foo.log
but it didn't work.
-
William Pursell almost 14 yearsYour question is poorly phrased. When you invoke 'exec > foo.log', the stdout of the script is the file foo.log. I think you mean that you want the output to go to foo.log and to the tty, since going to foo.log is going to stdout.
-
Vitaly Kushner almost 14 yearswhat I'd like to do is to use the | on the 'exec'. that would be perfect for me, i.e. "exec | tee foo.log", unfortunately you can not use pipe redirection on the exec call
-
kvantour over 4 years
-
Vitaly Kushner almost 14 yearstail will leave a running process behind; in the 2nd script tee will block, or you will need to run it with & in which case it will leave a process as in the 1st one.
-
David Z almost 14 years@Vitaly: oops, forgot to background tee - I've edited. As I said, neither is a perfect solution, but the background processes will get killed when their parent shell terminates, so you don't have to worry about them hogging resources forever.
-
glenn jackman almost 14 yearspedantically, could also use braces ( {} )
-
William Pursell almost 14 yearsYikes: these look appealing, but the output of tail -f is also going to foo.log. You can fix that by running tail -f before the exec, but the tail is still left running after the parent terminates. You need to explicitly kill it, probably in a trap 0.
-
Vitaly Kushner almost 14 yearswell yeah, I considered that, but this is not redirection of the current shell stdout, it's kind of a cheat: you are actually running a subshell and doing a regular pipe redirection on it. Works though. I'm split between this and the "tail -f foo.log &" solution; I will wait a little to see if maybe a better one surfaces, if not I'm probably going to settle ;)
-
Admin about 12 yearsTee on most systems is buffered, so output may not arrive until after the script has finished. Also, since this tee is running in a subshell, not a child process, wait cannot be used to synchronize output to the calling process. What you want is an unbuffered version of tee similar to bogomips.org/rainbows.git/commit/…
-
Admin about 12 yearsThis is also likely to leak tee processes.
-
DevSolar about 12 years@Barry: Would you care to elaborate how you make this "leak tee processes"?
-
Admin about 12 years{ } executes a list in the current shell environment. ( ) executes a list in a subshell environment.
-
DevSolar about 12 years@Barry: POSIX specifies that tee should not buffer its output. If it does buffer on most systems, it's broken on most systems. That's a problem of the tee implementations, not of my solution.
-
Admin about 12 yearsThe main reason this is fragile is all of the extra processes that are started. If one ever has to kill or restart it, all of the related script processes will need to be killed one-by-one (HUP is not sent to them if backgrounded). Also, it has multiple concurrent writers and doesn't handle any errors. Consider adding -e to the hashbang.
-
Admin about 12 yearsYeap. If the script is backgrounded, it leaves processes all over.
-
Sebastian about 12 yearsHow can I stop logging using this method? i.e. reset stdout to only the terminal (and not logfile.txt). I might ask a new question as well, but it is very related.
-
DevSolar about 12 years@Sebastian: exec is very powerful, but also very involved. You can "back up" the current stdout to a different file descriptor, then recover it later on. Google "bash exec tutorial", there's lots of advanced stuff out there.
-
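That back-up-and-restore technique can be sketched as follows (descriptor number 3 is an arbitrary choice):

```shell
#!/usr/bin/env bash
exec 3>&1                      # back up the original stdout on fd 3
exec > >(tee -a logfile.txt)   # start logging
echo "this line is logged"
exec 1>&3 3>&-                 # restore stdout and close the backup
echo "this line goes to the terminal only"
```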
Adam Spiers over 11 years@Barry: I cannot make this approach leak tee processes no matter what I try. Please provide a test case.
-
DevSolar over 11 years@AdamSpiers: I'm not sure what Barry was about, either. Bash's exec is documented not to start new processes, >(tee ...) is a standard named pipe / process substitution, and the & in the redirection of course has nothing to do with backgrounding... ?:-)
-
Luca Borrione over 11 yearsI copied this snippet on a file and then I ran it using 'bash myfile.sh' on a new terminal window and it does the trick but it keeps hanging until I press ctrl+c or I put an exit 1 at the end of the script. Why and how to avoid this? Thanks
-
Chris Johnson over 11 yearsWhen I try this, I receive an error message objecting to one or the other of the ">" characters: syntax error near unexpected token `>'. I'm running GNU bash, version 4.1.2(1). Any ideas?
-
DevSolar over 11 years@ChrisJohnson: This works for me on various bash versions ranging from 3.1.17 to 4.1.10. I have no idea where your problem comes from.
-
GergelyPolonkai over 11 yearsSame "unexpected token" problem here with bash 4.2.37
-
DevSolar over 11 years@LucaBorrione: The script is not hanging, you just get the output of the script after the new prompt. exit 1 doesn't actually change that.
-
GergelyPolonkai over 11 years@DevSolar the problem was that the calling script was invoking sh myscript.sh instead of bash myscript.sh. Sorry for not checking before posting.
-
abourget about 11 yearsThen, would there be a way to restore the output, or to force something to be output to the real original STDOUT ?
-
DevSolar about 11 years@abourget: Yes there is, but that's a different (and separate) question.
-
oHo almost 11 yearsSimilar answer as the second idea from David Z. Have a look at its comments. +1 ;-)
-
alveko almost 10 yearsFor some reason, in my case, when the script is executed from a c-program system() call, the two tee sub-processes continue to exist even after the main script exits. So I had to add traps like this:
exec > >(tee -a $LOG)
trap "kill -9 $! 2>/dev/null" EXIT
exec 2> >(tee -a $LOG >&2)
trap "kill -9 $! 2>/dev/null" EXIT
-
Andy Ray almost 9 yearsThis strips colors from stdout output. Is there any way to keep color?
-
DevSolar almost 9 years@AndyRay: This is an issue of tools (like grep) auto-detecting whether their output is to a terminal or a file, and adjusting their output accordingly. Since you are piping your output, these tools detect "not a terminal" and do not generate ANSI escapes. In the case of grep, you can give the option --color=always to enforce color. Other tools have similar options.
-
JamesThomasMoon almost 9 yearsI suggest passing -i to tee. Otherwise, signal interrupts (traps) will disrupt stdout in the main script. For example, if you have a trap 'echo foo' EXIT and then press ctrl+c, you will not see "foo". So I would modify the answer to exec &> >(tee -ia file).
-
JamesThomasMoon almost 9 yearsI suggest passing -i to tee. Otherwise, signal interrupts (traps) will disrupt stdout in the script. For example, if you trap 'echo foo' EXIT and then press ctrl+c, you will not see "foo". So I would modify the answer to exec > >(tee -ia foo.log).
-
Chris Johnson almost 9 yearsWorks well. I'm not understanding the $logfile part of tee < ${logfile}.pipe $logfile &. Specifically, I tried to alter this to capture full expanded command log lines (from set -x) to file while only showing lines without leading '+' in stdout, by changing to (tee | grep -v '^+.*$') < ${logfile}.pipe $logfile &, but received an error message regarding $logfile. Can you explain the tee line in a little more detail?
-
Sam Watkins almost 9 yearsI made some little "sourceable" scripts based on this. Can use them in a script like . log or . log foo.log : sam.nipl.net/sh/log sam.nipl.net/sh/log-a
-
erikbstack over 8 yearsWhat could be the reason if it always writes an error message /dev/fd/<Number>: no such file? In the end the log file exists but is empty; streams seem to get printed as normal, not redirected and buffered by tee.
DevSolar over 8 years@erikb85: Please don't post questions as comments. Post a question instead. (I'd welcome it if you'd delete that comment.)
-
erikbstack over 8 years@DevSolar This is debugging the proposed solution, not a separate question. I thought instead of just down voting and saying it doesn't work it would be better to discuss what doesn't work. Copy&Paste results in the error message.
-
DevSolar over 8 years@erikb85: man bash, section Process Substitution: "Process substitution is supported on systems that support named pipes (FIFOs) or the /dev/fd method of naming open files. It takes the form of <(list) or >(list). The process list is run with its input or output connected to a FIFO or some file in /dev/fd." --- At which point I'd look at your system and why bash thinks there should be /dev/fd/... when there is not. A problem of your system, not the solution presented here (and elsewhere, this is by no means an invention of myself).
-
erikbstack over 8 yearsYeah but why should my Ubuntu be different than others? Wouldn't believe that this only works on some weird Unix versions people don't use.
-
DevSolar over 8 years@erikb85: This is no "trick" that only works on some weird Unix version, this is a documented bash feature. Tested to work on Cygwin bash 4.3, Ubuntu/Mint bash 4.3, AIX bash 4.3, and SLES bash 3.2 (!!), just by myself and just this morning. I don't know which Unix the other 160 upvoters have been using over the last five years, or why it's so hard for you to understand that you're barking up the wrong tree. Please post your own question and delete your comments here. You are not adding any value to this answer, just noise. You keep this up, and I flag it for mod attention.
-
Jon Carter over 8 yearsDamn. Thank you. The accepted answer up there didn't work for me, trying to schedule a script to run under MingW on a Windows system. It complained, I believe, about unimplemented process substitution. This answer worked just fine, after changing ) | tee foo.log to ) 2>&1 | tee foo.log in order to capture both stderr and stdout.
-
CMCDragonkai about 8 yearsThe problem with this method is that messages going to STDOUT appear first as a batch, and then messages going to STDERR appear. They are not interleaved as usually expected.
-
Lars Noschinski over 7 yearsPlease note that with this solution, tee will keep running even after the script finished. This may result in e.g. a SSH connection not finishing after termination of the script.
-
DevSolar over 7 years@LarsNoschinski: I note that Barry had the same comment to make, was asked by Adam Spiers to provide a test case, and has fallen silent. Also note my comment from Aug 10 '12 at 10:56. I would welcome a test case.
-
Darren Oakey over 7 yearsthis was fairly nasty for me - it changed the output slightly, losing the initial carriage return - but also for some reason then required you to hit enter to continue. I tried the {... } 2>&1 | tee the.log from below - much cleaner and for me behaved as the original script did
-
Mike Baglio Jr. over 5 yearsThis is the only solution I've seen so far that works on mac
-
akhan almost 5 yearsAlso see here on how to pipe the output to another program using exec, ts for instance.
-
BrainStone almost 4 yearsIs there a way to also log everything sent through stdin? Like I have a few reads in my script and those are not captured...
-
DevSolar almost 4 years@BrainStone: I'd suggest posting that as a separate question.
-
BrainStone almost 4 years@DevSolar fair point. Here it is: stackoverflow.com/q/62291762/1996022
-
HeroCC almost 4 yearsI tested this out and it seems this answer doesn't preserve STDERR (it is merged with STDOUT), so if you rely on the streams being separate for error detection or other redirection, you should look at Adam's answer.
-
mles over 3 yearsIt seems your solutions sends only stdout to a separate file. How do I send stdout and stderr to a separate file?
-
Ben Farmer about 3 yearsErr yeah so the fact that it keeps capturing stdout to the file after the script finishes is a pretty serious problem and makes the current solution completely unusable. The answer needs to include the step of ending the logging!
-
Ben Farmer about 3 yearsFor me this answer is way simpler and easier to understand than the accepted one, and also doesn't keep redirecting output after the script finishes like the accepted answer does!
-
DevSolar about 3 years@BenFarmer: And with an answer standing for eleven years, you didn't care to double-check your assertion, or come up with a MCRE? Because this solution doesn't "keep capturing stdout". What it does is printing the new command prompt before the stdout from the script, which might catch some people unaware. But the "solution" is to just carry on (or press Enter once again). There is no "ending the logging".
-
Ben Farmer about 3 years@DevSolar: Untrue. It continues capturing if you run the script by sourcing it into the current shell, i.e. with ".", which is the only way I can run scripts on my current machine due to security settings. If you run it in a subshell it is fine, but this can catch people out.
-
DevSolar about 3 years@BenFarmer If you can only run in your current shell "for security reasons" your system is pretty much FUBAR to begin with. Note that many other kinds of resource acquisition will also fail to release in your case, because Unix environments rely on process cleanup. That is only one point where your system's "security" setup compromises on your security. But at least I understand now where you are coming from. I might add a cleanup if I find the time. -- Note the comment from August 2012 that already touches on the issue.