Very large log files, what should I do?


Solution 1

Simply delete these files and then reboot?

No. Empty them, but do not use rm: in the window between the rm and the touch that recreates the file, a process writing to that log could end up crashing.

Shortest method:

cd /var/log
sudo su
> lastlog
> wtmp
> dpkg.log 
> kern.log
> syslog
exit

If you are not root, these commands will require sudo. Taken from another answer on Ask Ubuntu.
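If you would rather not open a root shell, the same truncation can be done through sudo. A sketch using standard coreutils (truncate and tee, both present on Ubuntu; the file name is just an example):

```shell
# `sudo > /var/log/syslog` does NOT work: the redirection is performed
# by your own (unprivileged) shell, not by the elevated command.
# Either of these truncates the file in place while it stays open:
sudo truncate -s 0 /var/log/syslog
sudo tee /var/log/syslog < /dev/null
```

Truncating in place is what makes this safe: the syslog daemon keeps its open file handle and simply continues writing at offset zero.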

BEFORE YOU DO THAT: do a tail {logfile} and check whether there is a reason for them to be so big. Unless this system is several years old there should be no reason for this, and fixing the problem is better than letting it go on.

Both kern.log and syslog should normally not be that big. But like I said: if this system has been up and running for years and years, it might be normal and the files just need to be cleared.

And to prevent them from becoming that big in the future: set up logrotate. It is pretty straightforward and will compress the log file when it grows bigger than a size you set.
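A minimal logrotate rule could look like the sketch below (the file name and thresholds are assumptions, not taken from the question; on Ubuntu, syslog is normally already covered by /etc/logrotate.d/rsyslog, so adjust that file rather than duplicating it):

```
# Sketch of a drop-in file under /etc/logrotate.d/
/var/log/mylog.log {
    # rotate as soon as the file exceeds 100 MB
    size 100M
    # keep four rotated copies
    rotate 4
    # gzip the rotated files
    compress
    missingok
    notifempty
}
```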


One other thing: if you do not want to delete the contents, you can compress the files by tarring or gzipping them. That will likely leave you with files around 10% of their current size, provided there is still enough room on the disk to do so.

Solution 2

It's probably worth trying to establish what is filling the log(s), either by simply examining them visually using the less or tail commands:

tail -n 100 /var/log/syslog

or, if the offending lines are too deeply buried to easily see what's occurring, something like

for log in /var/log/{dmesg,syslog,kern.log}; do 
  echo "${log} :"
  sed -e 's/\[[^]]\+\]//' -e 's/.*[0-9]\{2\}:[0-9]\{2\}:[0-9]\{2\}//' ${log} \
  | sort | uniq -c | sort -hr | head -10
done

which will attempt to strip off the timestamps and then count the most frequently occurring messages (note: this may take some time, given such large files).
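The counting idiom at the heart of that pipeline (sort | uniq -c | sort -hr) can be tried in isolation; a sketch with made-up input lines:

```shell
# Group identical lines, count them, and list the most frequent first.
printf 'err A\nerr B\nerr A\nerr A\n' \
  | sort | uniq -c | sort -hr | head -10
# The most frequent line ("err A", 3 occurrences) comes out on top.
```

The preceding sed expressions matter because each log line carries a unique timestamp; without stripping it, every line would be counted once.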

Solution 3

My method for cleaning system log files is this. Steps 1 and 2 are optional, but sometimes you need to check older logs, and a backup is sometimes useful. ;-)

  1. Optional: Copy log file

    cp -av --backup=numbered file.log file.log.old
    
  2. Optional: Gzip the copy of the log

    gzip file.log.old
    
  3. Use /dev/null to empty the file

    cat /dev/null > file.log
    

For these logs we also use logrotate (only on several servers), together with a weekly cron script that gzip-compresses all files named *.1 (or later rotations).

Solution 4

I installed Ubuntu 16.04 today and noticed the same problem. However, I fixed it with busybox-syslogd. Yup! I just installed that package and the problem was solved. :)

$ sudo apt-get install busybox-syslogd

After installing that package, reset syslog and kern.log:

sudo tee /var/log/syslog /var/log/kern.log </dev/null

I hope this simple solution is useful to other people around.

Author: Masroor ("I teach. And love it.")

Updated on September 18, 2022

Comments

  • Masroor
    Masroor over 1 year

    (This question deals with a similar issue, but it talks about a rotated log file.)

    Today I got a system message regarding very low /var space.

As usual I executed commands along the lines of sudo apt-get clean, which improved the situation only slightly. Then I deleted the rotated log files, which again provided very little improvement.

Upon examination I found that some log files in /var/log have grown very large. To be specific, ls -lSh /var/log gives,

    total 28G
    -rw-r----- 1 syslog            adm      14G Aug 23 21:56 kern.log
    -rw-r----- 1 syslog            adm      14G Aug 23 21:56 syslog
    -rw-rw-r-- 1 root              utmp    390K Aug 23 21:47 wtmp
    -rw-r--r-- 1 root              root    287K Aug 23 21:42 dpkg.log
    -rw-rw-r-- 1 root              utmp    287K Aug 23 20:43 lastlog
    

    As we can see, the first two are the offending ones. I am mildly surprised why such large files have not been rotated.

    So, what should I do? Simply delete these files and then reboot? Or go for some more prudent steps?

    I am using Ubuntu 14.04.

    UPDATE 1

To begin with, the system is only several months old. I had to install the system from scratch a couple of months back after a hard disk crash.

    Now, as advised in this answer, I first checked the offending log files using tail, no surprise there. Then, for deeper inspection, I executed this script from the same answer.

    for log in /var/log/{syslog,kern.log}; do 
      echo "${log} :"
      sed -e 's/\[[^]]\+\]//' -e 's/.*[0-9]\{2\}:[0-9]\{2\}:[0-9]\{2\}//' ${log} \
      | sort | uniq -c | sort -hr | head -10
    done
    

The process took several hours. The output was along the lines of,

    /var/log/syslog :
    71209229  Rafid-Hamiz-Dell kernel:  sda3: rw=1, want=7638104968240336200, limit=1681522688
    53929977  Rafid-Hamiz-Dell kernel:  attempt to access beyond end of device
    17280298  Rafid-Hamiz-Dell kernel:  attempt to access beyond end of device
       1639  Rafid-Hamiz-Dell kernel:  EXT4-fs warning (device sda3): ext4_end_bio:317: I/O error -5 writing to inode 6819258 (offset 0 size 4096 starting block 54763121030042024)
           <snipped>
    
    /var/log/kern.log.1 :
    71210257  Rafid-Hamiz-Dell kernel:  attempt to access beyond end of device
    71209212  Rafid-Hamiz-Dell kernel:  sda3: rw=1, want=7638104968240336200, limit=1681522688
       1639  Rafid-Hamiz-Dell kernel:  EXT4-fs warning (device sda3): ext4_end_bio:317: I/O error -5 writing to inode 6819258 (offset 0 size 4096 starting block 954763121030042024)
    

(/dev/sda3 holds my home directory. As we can see,

    lsblk /dev/sda
    NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    sda      8:0    0 931.5G  0 disk 
    ├─sda1   8:1    0 122.1G  0 part /
    ├─sda2   8:2    0   7.6G  0 part [SWAP]
    └─sda3   8:3    0 801.8G  0 part /home
    

Why a process would want to write beyond the device limit is beyond my comprehension. Perhaps I will ask a different question on this forum if this continues even after a system update.)

    Then, from this answer (you may want to check this for a deeper understanding), I executed,

    sudo su -
    > kern.log
    > syslog
    

Now, these files have zero size. The system was running fine before the reboot and continues to run fine after it.

    I will watch these files (along with others) in the next few days and report back should
    they behave out-of-line.

    As a final note, both the offending files (kern.log and syslog), are set to be rotated, as inspection of the files (grep helped) inside /etc/logrotate.d/ shows.

    UPDATE 2

The log files are actually rotated. It looks like the large sizes were reached in a single day.

    • douggro
      douggro over 9 years
      Is there anything in those log files that lends a clue as to why they are so large? Delete and reboot, then monitor them to see if they grow in some exponential fashion.
    • Masroor
      Masroor over 9 years
      @douggro Indeed there are. Please see my update to the question.
    • Bhaskar
      Bhaskar about 4 years
      I had this issue and it was because of loads of docker-containers running in background..
  • Janus Troelsen
    Janus Troelsen almost 9 years
    wtmp: Command not found Which package is this?
  • Rinzwind
    Rinzwind almost 9 years
    /var/log/wtmp is not a command but a log file. Where does my answer state you can execute wtmp? ;-)
  • Janus Troelsen
    Janus Troelsen almost 9 years
    I thought > was a prompt and tried "lastlog" and it worked, so I assumed that I understood correctly :P
  • Aaron Franke
    Aaron Franke over 7 years
    What, exactly, does this package do, and how does this solution work?
  • Gayan
    Gayan over 7 years
This issue keeps happening to me. I'm using Ubuntu 16.04. Could you tell me what seems to cause this? Thanks in advance!
  • Rinzwind
    Rinzwind over 7 years
    I/O errors will be hardware related. Faulty cable. Faulty hard disk. Or a faulty filesystem. "attempt to access beyond end of device" seems serious.
  • Sergiy Kolodyazhnyy
    Sergiy Kolodyazhnyy over 7 years
    @Gayan hi there ! I was looking at the errors that you provided in original question. Looks like something was writing to same inode, 6819258 . Check if that the same inode in your 16.04. Regardless if it is the same or different, consider checking to what file does this inode belong , see this for a few methods how to do so. Maybe checking what file is being written to might shed a clue on the cause of the issue. Also, don't discount Rinzwind's suggestion - it could potentially be related to hardware
  • Rinzwind
    Rinzwind over 7 years
    @Gayan did you ever do a file system check? do a sudo touch /forcefsck and reboot. It will start a file system check :)
  • SDsolar
    SDsolar over 6 years
    I am dubious about this post since those files wouldn't have a chance to grow large in a single day. So I will hold off until I hear from others about this program.
  • WinEunuuchs2Unix
    WinEunuuchs2Unix over 6 years
    I actually ran into problems using touch to recreate /var/log/syslog as you warn about. +1 for belated education :)
  • Luís de Sousa
    Luís de Sousa over 5 years
    Unfortunately, this solution does not work on Ubuntu 18.04.
  • Luís de Sousa
    Luís de Sousa over 5 years
    This is the way to go on Ubuntu 18.04.
  • Rinzwind
    Rinzwind over 5 years
    Then you are doing something wrong. Since these are core Linux tools they work on almost any Linux :)
  • Tor Klingberg
    Tor Klingberg over 5 years
    This answer does not adequately describe what you are supposed to do with lastlog, wtmp, dpkg.log, kern.log and syslog.
  • Rinzwind
    Rinzwind over 5 years
    @TorKlingberg that was not the question so the answer indeed does not reflect that
  • Sudip Bhandari
    Sudip Bhandari almost 5 years
    This should be the accepted answer. When logs are filling up rapidly like that (despite logrotate) something is inherently wrong and is worth digging deeper into
  • Chagai Friedlander
    Chagai Friedlander about 3 years
@TorKlingberg thanks for your comment, it took me time to understand this... you can clear the log file by executing > logfilename, as explained here
  • Tor Klingberg
    Tor Klingberg about 3 years
I can't remember what I meant with my comment from two years ago, but apparently 15 other people agreed, so I guess it stays. Perhaps I didn't understand that > redirects nothing into the file, and thought it was a prompt.