Disk space usage doesn't add up with df & du


You probably have a large deleted log file, database file or something similar lying around, waiting for the process that still holds it open to release it.

In Linux, deleting a file simply unlinks it from the directory tree; the space is only reclaimed once no process has an open file handle to it any more. So if you have a 2 GB log file which you delete manually with rm, the disk space will not be freed until you restart the syslog daemon (or send it a HUP signal) so that it closes and reopens its logs.
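You can see the behaviour for yourself with a quick test (a minimal sketch; the file name, sizes and /tmp filesystem are only illustrative):

# create a 1 GB file and keep it open with a long-running process
dd if=/dev/zero of=/tmp/bigfile bs=1M count=1024
tail -f /tmp/bigfile &

# delete it - du no longer counts it, but df still shows the space as used
rm /tmp/bigfile
df -h /tmp

# once the process holding the handle exits, the space is actually freed
kill $!
df -h /tmp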

Try

lsof -n | grep -i deleted

and see if you have any deleted zombie files still floating around.
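If you can't conveniently restart the process that still holds the handle, you can usually reclaim the space by truncating the deleted file through /proc instead. A minimal sketch, where PID and FD are placeholders for the values lsof shows in its PID and FD columns (drop any trailing r/w/u from the FD entry):

# empty the still-open, already-deleted file in place
: > /proc/PID/fd/FD

The kernel frees the blocks immediately because the open file shrinks to zero bytes; restarting or HUP'ing the owning process is still the cleaner long-term fix.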


Comments

  • Codecraft
    Codecraft almost 2 years

    I'm trying to free up some disk space - if I do a df -h, I have a filesystem called /dev/mapper/vg00-var which says it's 4G, 3.8G used, 205M left.

    That corresponds to my /var directory.

    If I descend into /var and do du -kscxh *, the total is 2.1G

    2.1G + 200M free = 2.3G... So my question is, where is the remaining 1.7G?

    • Kyle Smith
      Kyle Smith about 12 years
      What does du -shx /var say?
    • cjc
      cjc about 12 years
      You could also have deleted files that have open file handles. The OS won't release the space until the handles are closed, but you won't see them with "du". You can run "lsof /var | grep deleted" (or something similar) to see those. This would actually not be a surprising finding in, say, /var/log, if the logs are rotated but the logging process isn't HUP'ed in the right way.
    • Codecraft
      Codecraft about 12 years
      I had been deleting some log files that had gone crazy, it seemed as though they hadn't been rotating, but anyway, I had a one-word email from a friend: 'reboot' - figured he was being sarcastic but apparently not :) I have found my disk space again... disaster averted (for now).
    • cjc
      cjc about 12 years
      @Codecraft Yeah, rebooting will definitely clear any open file handles, although that's sort of like cracking an egg with a hammer.
    • Codecraft
      Codecraft about 12 years
      @cjc as long as I get at the yolky goodness...! Any suggestions on how I could clear open file handles without hammering my egg?
    • user1364702
      user1364702 about 12 years
      Run lsof with sudo/root, then look and see what still has the files you deleted open. Close or restart those processes. That will release the file handles.
    • cjc
      cjc about 12 years
      @Codecraft Basically what Bart said, though, if you know you're deleting the logs for process foo, it's a good bet that you'll also need to restart/HUP process foo. lsof will definitely show you everything, though.
  • Codecraft
    Codecraft about 12 years
    I didn't get to running your command, but what you said appeared to be bang on - I had been manually killing some logs-gone-crazy, and in the end, a reboot caused the disk space to be recalculated and show correctly.
  • Yvan
    Yvan over 9 years
    It worked for us with Apache logs filling up the /var/log/apache/ directory. So you may not have to restart your whole server or even syslog, just the service you find in the output of the command above.
  • Nick
    Nick over 8 years
    Just had this ourselves. We had tomcat6's catalina.out not getting caught by logrotate, so we deleted it when it reached 4 GB and fixed logrotate. Weeks later we wondered why the 4 GB hadn't come back. That lsof command showed we had a lot of tomcat files pending deletion. We restarted tomcat and suddenly had tonnes of space back!
  • sudo
    sudo about 7 years
    Finally an answer that worked for me. PostgreSQL had crashed and left file handles open to 400 GiB (!!!) of unlinked files on a 1 TiB disk. Restarting Postgres fixed it.