How do you keep only the last n lines of a log file?
Solution 1
It is possible like this, but as others have said, the safest option is to generate a new file and then move it over the original.
The method below loads the lines into BASH, so the more lines tail keeps, the more memory the local shell uses to hold the log content.
It also removes any empty lines at the end of the log file (a side effect of BASH evaluating "$(tail -1000 test.log)"), so it does not give a truly 100% accurate truncation in all scenarios, but depending on your situation it may be sufficient.
$ wc -l myscript.log
475494 myscript.log
$ echo "$(tail -1000 myscript.log)" > myscript.log
$ wc -l myscript.log
1000 myscript.log
Solution 2
The utility sponge is designed just for this case. If you have it installed, then your two lines can be written:
tail -n 1000 myscript.log | sponge myscript.log
Normally, reading from a file at the same time that you are writing to it is unreliable. sponge solves this by not writing to myscript.log until after tail has finished reading it and terminated the pipe.
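To see why sponge is needed, here is a minimal demonstration of the hazard it avoids (the file name is illustrative): the shell opens and truncates the redirect target before tail ever runs, so redirecting a file into itself loses the data.

```shell
# Demonstration: the shell truncates the target before tail reads it.
tmp=$(mktemp)
printf '%s\n' one two three > "$tmp"
tail -n 2 "$tmp" > "$tmp"        # shell truncates "$tmp" first
wc -l < "$tmp"                   # prints 0: the file is now empty
rm -f "$tmp"
```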
Install
To install sponge on a Debian-like system:
apt-get install moreutils
To install sponge on a RHEL/CentOS system, add the EPEL repo and then do:
yum install moreutils
Documentation
From man sponge:
sponge reads standard input and writes it out to the specified file. Unlike a shell redirect, sponge soaks up all its input before writing the output file. This allows constructing pipelines that read from and write to the same file.
Solution 3
Definitely, "tail + mv" is much better! But with GNU sed we can try:
sed -i -e :a -e '$q;N;101,$D;ba' log
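Note that this script keeps the last 100 lines: the address 101,$D starts dropping the oldest held line once more than 100 lines have been read, so for n lines use the address n+1,$D. A small demonstration of the same script with n = 3, on piped input rather than in-place:

```shell
# Keep the last 3 lines: same script with the address 4,$D.
# :a    label for the loop
# $q    on the last input line, print the held lines and quit
# N     append the next input line to the pattern space
# 4,$D  from input line 4 on, delete the oldest held line and restart
# ba    branch back to :a
printf '%s\n' one two three four five | sed -e :a -e '$q;N;4,$D;ba'
# prints: three, four, five (one per line)
```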
Solution 4
For the record, with ed you could do something like
ed -s infile <<\IN
0r !tail -n 1000 infile
+1,$d
,p
q
IN
This opens infile and reads in the output of tail -n 1000 infile via the r command (i.e. it inserts that output before the 1st line), then deletes everything from what was initially the 1st line to the end of the file. Replace ,p with w to edit the file in-place.
Keep in mind though that ed solutions aren't suitable for large files.
dr_
Updated on September 18, 2022

Comments
-
dr_ almost 2 years
A script I wrote does something and, at the end, appends some lines to its own logfile. I'd like to keep only the last n lines (say, 1000 lines) of the logfile. This can be done at the end of the script in this way:
tail -n 1000 myscript.log > myscript.log.tmp
mv -f myscript.log.tmp myscript.log
but is there a more clean and elegant solution? Perhaps accomplished via a single command?
-
Ipor Sircer almost 8 years
logrotate is the elegant solution
-
dr_ almost 8 years
I've thought of it, but the logrotate configuration would be longer than the script itself...
-
kba almost 8 years
If logrotate is overkill, your solution is about as elegant as it gets. With sed/awk you might be able to do it in one line but not without a temp file internally, so it's probably not more efficient and probably less readable.
-
Mohamad Osama over 2 years
I found a better way to keep only the last couple of days of logs:
days1=$(date +%Y-%m-%d -d "1 day ago")
days0=$(date +%Y-%m-%d)
grep -i "\|$days1\|$days0" myscript.log > myscript.log.new
mv myscript.log.new myscript.log
-
dr_ almost 8 years
+1 Thanks, I did not know sponge. Very useful for all those who learnt the hard way that you cannot do sort importantfile.txt > importantfile.txt :)
-
dr_ almost 8 years
Smart. I marked this as the accepted answer as it doesn't require installation of additional tools. I wish I could accept both yours and @John1024's answer.
-
parkamark almost 8 years
Your call. I upvoted the sponge solution as I didn't know about it and it is guaranteed not to mess with empty log lines. This solution has the potential to do that, depending on the log file content.
-
Alex Baranovskyi over 4 years
This solution has a race condition. If you are unlucky, the redirect -into- the file happens before the reading -from- the file and you end up with an empty file.
-
Artem Russakovskii over 4 years
Brilliant solution. Worked like a charm.
-
cha almost 4 years
The solution works really well, but the disk is still full and I do not understand what happened. I have an 80Gb drive and one log file grew out of control to 33Gb, leaving only 8Gb free. I truncated the log file using the above solution and it is now 100Mb. However, the df utility still shows almost the same free disk space of about 8Gb. How is it possible?
-
Eric almost 3 years
Care to explain what this magic actually does? Thanks.
-
Kusalananda almost 3 years
It is not clear from your text in what ways this is better than what the user already uses.
-
Jeremy Boden almost 3 years
You wouldn't delete old log messages individually. You would start a new log file and archive the old one. Use logrotate to do this automatically for you.
-
Namasivayam Chinnapillai almost 3 years
There are exceptional cases, such as no free space being available on the server to store the archived log file (logrotate). In that kind of situation we have to keep only the latest logs and remove the older entries. This script helped me meet my requirement without increasing disk usage while keeping the latest log entries.
-
Peter VARGA over 2 years
@Coroos I ran this command for a 6GB file and no issues.
-
Alex Baranovskyi over 2 years
The race condition is between the opening of the file for reading by tail and the piping into the same filename by the shell that executes the command. The size of the file does not matter.
-
Jon V about 2 years
It's possible that the process that writes to the log is still holding on to the original file handle, and so the system thinks that the storage is still being consumed? Maybe restart that process?
-
Admin almost 2 years
Note that this doesn't preserve file ownership