How do I see all previous output from a completed terminal command?

In my experience, the consensus in the comments is correct: once the terminal's buffer has been exceeded, that data is lost (or as good as lost; it could conceivably still be in memory that hasn't been overwritten yet), and because of this you can't retroactively increase the buffer size.

This answer sits somewhere between a comment, an answer, and perhaps overkill for your situation. It's more of a suggested approach that may address your situation, particularly the problem of not knowing you need the log until it is too late (non-causal problems are hard), but it is not a direct answer to your question.

In any case, it was too long for a comment. I'm not explicitly listing all the code required to implement this approach, mostly because there are a bunch of implementation decisions that need to be made; if you need more detailed info I'd be glad to provide it.

Script is far from pleasant to deal with

First off, the script utility has been suggested as a 'stopgap' to prevent the loss of data without increasing the buffer size (which has security implications when set to unlimited). If ever there was a utility that needed some TLC, script is it. Then again, it was developed by the kernel team. Read into that what you will.

I find script to frequently be more trouble than it's worth (post-processing it to make it semi-human-readable, etc.), and instead have started using a simplified method to log stdout, stdin, and/or stderr. In some sense this recreates script, but with full control instead of being at the mercy of script's hard-coded logging settings.

This approach can be integrated relatively seamlessly into your shell sessions, and in the rare cases where you do overflow the terminal's buffer, you'll have a temporary file with those contents. To keep the logging 'clean', there are some housekeeping steps you'll have to address. Additionally, the same security issue (a log of all terminal output) will exist by default; however, there is a simple way to encrypt the logs.

There are 3 basic steps:

  1. Configure redirection so that you split stdout (and stderr, if desired) to a file and to the terminal. To keep this example simple I am not redirecting stdin or stderr to the file; once you understand the stdout redirection example, the rest is trivial.
  2. Configure .bashrc so this logging starts whenever a shell is opened.
  3. When a given shell is closing, use the bash built-in trap to call user code that terminates the session logging (you can delete the file, archive it, etc.).

With this approach you will effectively have an invisible safety net that allows you to see the entire history of a given shell session (based on what you redirect; again, to simplify things I am only showing stdout); when you don't need it, you shouldn't even know it's there.

Details

1. Configure Redirection

The following code snippet creates file descriptor 3 as a copy of the terminal's stdout, then redirects stdout into tee, which splits the stream into a log file and, via fd 3, back to the terminal. You can trivially add stderr to the same command / log file, pipe it to a different file, or leave it as is (unlogged).

logFile=$(mktemp -u)                  # unique path for this session's log
exec 3>&1 1> >(tee "$logFile" >&3)    # fd 3 = terminal; stdout -> tee -> log + terminal
  • You'll find this log file to be far cleaner than the one generated by script; it doesn't store backspaces, linefeeds, and other special characters that are frequently unwanted.

  • Note that if you want the log file encrypted, you can do that fairly easily by adding an additional pipe stage after the tee command, through openssl.
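As a sketch of that idea, you can point tee at a process substitution that encrypts rather than writing plaintext. The cipher choice, the -pbkdf2 flag (OpenSSL 1.1.1+), and the passphrase-file location below are my assumptions, not part of the original answer:

```shell
# Hypothetical encrypted variant of the logging setup.
# Assumes bash (process substitution) and OpenSSL 1.1.1+ (for -pbkdf2).
logFile=$(mktemp -u)
passFile=${PASS_FILE:-$HOME/.session_log_pass}   # assumed location; protect this file
[ -f "$passFile" ] || { umask 077; echo 'change-me' > "$passFile"; }  # placeholder passphrase

# tee still echoes to the terminal via fd 3, but the file side of the
# split now passes through openssl before hitting disk.
exec 3>&1 1> >(tee >(openssl enc -aes-256-cbc -pbkdf2 \
    -pass "file:$passFile" -out "$logFile") >&3)

# To read the log later:
#   openssl enc -d -aes-256-cbc -pbkdf2 -pass "file:$passFile" -in "$logFile"
```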

2. Automate the log generation

In .bashrc, add the same code as above. Each time a new shell is created, a log file specific to that session will be created.

export logFile=$(mktemp -u)
exec 3>&1 1> >(tee "$logFile" >&3)
echo "Current session is being logged in $logFile"
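One caveat (also raised in the comments below): .bashrc may be sourced in contexts where redirecting stdout breaks things, such as scp and other non-interactive connections. A minimal guard, assuming you only want logging in interactive shells, is to check for the i flag in $-:

```shell
# Only start session logging when the shell is interactive;
# $- contains the letter 'i' for interactive shells.
case $- in
  *i*)
    export logFile=$(mktemp -u)
    exec 3>&1 1> >(tee "$logFile" >&3)
    echo "Current session is being logged in $logFile"
    ;;
esac
```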

3. Automatically close out logging when shell is closing

If you want the log file to be deleted when the session ends, you can use the bash built-in trap to detect that the session is ending and call a function to deal with the log file, for example (also in .bashrc):

closeLog () {
  rm -f "$logFile" 2>/dev/null
}

trap closeLog EXIT

Session-logging cleanup could be handled in a number of different ways. This approach gets called when the shell is closing, by trapping EXIT (a bash pseudo-signal). At that point you could delete the log file, move or rename it, or clean it up in any number of ways. You could also have the log files cleaned up by a cron job rather than via a trap (if you take that approach, I'd suggest a periodic cleanup task if you don't already have one configured for the /tmp directory, since if the bash shell crashes the EXIT trap will not be triggered).
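As one concrete (hypothetical) variant, here is a closeLog that archives the log instead of deleting it; the archive directory and file-naming scheme are placeholders of my own choosing:

```shell
# Alternative closeLog: compress and archive the session log on exit
# instead of deleting it.
logDir=$HOME/.shell_logs             # assumed archive location

closeLog () {
  [ -f "$logFile" ] || return 0      # nothing to archive
  mkdir -p "$logDir"
  # Stamp the archive with this shell's PID and the current time.
  gzip -c "$logFile" > "$logDir/session-$$-$(date +%Y%m%d-%H%M%S).log.gz"
  rm -f "$logFile"
}

trap closeLog EXIT
```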

Note on handling subshells

An interesting situation develops with subshells. If a new interactive shell is opened on top of an existing one, a new log will be created, and everything should work fine. When that shell is exited (returning to the parent), logging to the parent's file will resume. If you want to handle this more cleanly, perhaps even maintaining a common log for subshells (interactive or otherwise), you will need to detect (in .bashrc) that you are in a nested subshell and redirect to the parent's log file rather than creating a new one. You will also need to check whether you are in a subshell so that your trap doesn't delete the parent's log file on exit. You can get the nesting level from the bash environment variable SHLVL, which stores the 'depth' of your shell stack.

Note on keeping your log 'clean':

If you do redirect stdin to the log file, you will end up with many of the same unwanted artifacts that the script utility generates. This can be addressed by adding a filter stage (e.g. sed or grep) between the redirection and the file: simply create a regex that removes anything you don't want logged. Fully cleaning it up would require some fairly in-depth processing (perhaps buffering each new line prior to writing to file, cleaning it up, then writing it); otherwise it is difficult to know when a backspace is 'garbage' or intended.
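As a sketch of such a filter stage, here is a sed pass that strips ANSI CSI escape sequences and carriage returns before the stream reaches the file. The regex is my assumption; it covers common color/cursor sequences, not every escape:

```shell
# Insert a sed stage between tee and the log file to strip ANSI CSI
# escape sequences (colors, cursor movement) and carriage returns.
esc=$(printf '\033')                 # literal ESC byte, portable across seds
cr=$(printf '\r')                    # literal carriage return
logFile=$(mktemp -u)

# (With GNU sed you may want -u to avoid buffering delays in the log.)
exec 3>&1 1> >(tee >(sed -e "s/$esc\[[0-9;]*[a-zA-Z]//g" -e "s/$cr//g" \
    > "$logFile") >&3)
```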

Author: JonahHuron (updated on September 18, 2022)

Comments

  • JonahHuron
    JonahHuron over 1 year

    I've executed a command in gnome terminal that printed more output to the terminal than I expected. I'd like to read the entire output, but the terminal scroll stops before reaching the beginning.

    I understand that I can change the terminal profile settings to enable unlimited scrolling, or pipe the output to a file, etc. All of these common solutions apply to future output, however.

    How do I view the complete terminal output of a command that has already been executed?

    Edit: All right, it can't be done. Thanks, everybody!

    • Admin
      Admin almost 8 years
      I don't think it's possible to retrieve it.
    • Admin
      Admin almost 8 years
      @MateiDavid is right, there isn't STDOUT history on the filesystem. If you don't have logging enabled for your terminal then you are out of luck.
    • Admin
      Admin almost 8 years
      That's right, it cannot be recovered. (If it would be possible to recover despite having a smaller value configured, there wouldn't be such a setting and gnome-terminal always remembered the entire output.)
    • Admin
      Admin almost 8 years
      To be absolutely correct: Similarly to how sometimes it's possible to retrieve the contents of a deleted file, it might be possible to retrieve a bit more of those contents if you stop using that g-t (but keep it open) and examine its files. This became much harder in vte-0.40 with scrollback encryption, but in some cases it still might be possible, although I'm pretty sure it wouldn't be worth it for you. (I can take a look if you first transfer a decent sum to my bank account and then grant access to your computer – just kidding :D)
    • Admin
      Admin almost 8 years
      (A note about scrollback encryption: In order to decrypt it, the key would need to be located in gnome-terminal server process's memory. After exiting from gnome-terminal, or closing the said tab, the key is lost for good.)
    • Admin
      Admin almost 8 years
      Just invoke script and press Enter before the command of interest. From this moment all output to the terminal will be captured into the buffer. When finished press Ctrl+d to dump all output to file typescript in the same directory.
  • Matei David
    Matei David almost 8 years
    Note that your solution causes stdout to be detected as not a tty ([ -t 1 ] returns false). Surely there are programs out there that would find this confusing.
  • Argonauts
    Argonauts almost 8 years
    Well, typically your .bashrc would check whether it is being sourced into an interactive shell before initializing, so for most situations where that would be problematic (non-interactive remote connections like scp, etc.), it would be a non-issue. It is entirely possible that it would break something, though. I haven't seen it, but I'd guess that a program like tmux might crash and burn. Still better than script, which absolutely won't work with remote connections or applications like tmux. Clarification: script WILL work remotely, but the log file created is horrific.
  • Qinsheng Zhang
    Qinsheng Zhang over 3 years
    It is a really clean introduction. Also, I think it may be better to have a log module in your program instead of relying on bash.