du vs. df difference

Solution 1

Ok, found it.

I had an old backup in /mnt/Backup on the same filesystem, and an external drive was later mounted on top of that directory, so du didn't see the files. Cleaning this up gave me back my disk space.

It probably happened this way: the external drive was once unmounted while the daily backup script was running, so the backup was written to the underlying directory on /.
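One way to check for this situation without unmounting anything is to bind-mount / to a scratch directory: the bind mount exposes the underlying directory tree without the other mounts stacked on top of it. A minimal sketch (requires root; the scratch path is arbitrary):

```shell
# Bind-mount the root filesystem to a scratch directory so that files
# hidden under active mount points (e.g. /mnt/Backup) become visible.
mkdir -p /tmp/rootonly
mount --bind / /tmp/rootonly

# du now sees anything written to the underlying /mnt/Backup directory,
# even while the external drive is mounted there.
du -hxs /tmp/rootonly/mnt/Backup

umount /tmp/rootonly
```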

Solution 2

I don't think you will find a more thorough explanation than this link for all the reasons it could be off. Some highlights that might help:

  • What is your inode usage? If it is almost at 100%, that can mess things up:

    df -i

  • What is your block size? Lots of small files and a large block size could skew it quite a bit.

    sudo tune2fs -l /dev/sda1 | grep 'Block size'

  • Deleted files: you said you investigated this, but to get the total space held by deleted-but-open files you could use the following pipeline (I like find instead of lsof just because lsof is a pain to parse):

    sudo find /proc/*/fd -printf "%l\t%s\n" | grep deleted | cut -f2 | (tr '\n' +; echo 0) | bc

However, your discrepancy is almost 2x, so to be safe, run fsck on the partition while it is unmounted.
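Since / cannot be unmounted while the system is running, the check has to happen from a rescue or live environment. A minimal sketch, assuming the /dev/sda3 device from the question:

```shell
# Run from a rescue/live system so that /dev/sda3 is NOT mounted.
# -f forces a full check even if the filesystem is marked clean.
fsck -f /dev/sda3
```

On sysvinit-era distributions you could instead `touch /forcefsck` and reboot to force a check of / at the next boot.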

Solution 3

It looks like a case of files being removed while processes still have them open. This disconnect happens because the du command totals up space of files that exist in the file system, while df shows blocks available in the file system. The blocks of an open and deleted file are not freed until that file is closed.

You can find which processes have open but deleted files by examining /proc:

find /proc/*/fd -ls | grep deleted
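If a culprit turns up, the space can often be reclaimed without killing the process: the /proc fd entries behave like the underlying file, so truncating one releases the blocks immediately. A sketch (the PID 1234 and descriptor 5 are hypothetical values taken from the find output):

```shell
# List deleted-but-open files, noting the PID and fd number.
find /proc/*/fd -ls 2>/dev/null | grep deleted

# Truncate the still-open (deleted) file of process 1234 on
# descriptor 5 to free its blocks without restarting the process:
: > /proc/1234/fd/5
```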

Solution 4

The most likely reason in your case is that you have lots of files that are very small (smaller than your block size on the drive). In that case df will report the sum of all used blocks, whereas du will report the actual sum of file sizes.
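You can see this effect with GNU du, which reports allocated blocks by default and the byte sum only with --apparent-size. A small sketch (the directory name is arbitrary):

```shell
# Create 100 one-byte files; each still allocates a full block.
d=$(mktemp -d)
for i in $(seq 1 100); do printf 'x' > "$d/f$i"; done

du -sB1 --apparent-size "$d"   # sum of file sizes (plus the directory entry)
du -sB1 "$d"                   # allocated blocks, e.g. ~400KiB at a 4KiB block size

rm -r "$d"
```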

Solution 5

I agree that

lsof +L 1 /home | grep -i deleted

is a good place to start. In my case I noticed that I had lots of perl scripts running that were keeping a lot of files alive, even though they were supposed to be deleted.

I killed the perl processes, and this made du and df almost identical. Case closed.

Author: Andreas Kuntzagk

Updated on September 17, 2022

Comments

  • Andreas Kuntzagk
    Andreas Kuntzagk over 1 year

    I have a fileserver where df reports 94% of / full. But according to du, much less is used:

    # df -h /
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/sda3             270G  240G   17G  94% /
    # du -hxs /
    124G    /
    

    I read that open but deleted files could be responsible for it but a reboot did not fix this.

    This is Linux, ext3.

    regards

    • EricMinick
      EricMinick over 14 years
      Combine @TCampbell and @Kyle Brandt's answers - reboot and if that doesn't fix it, boot from a rescue CD and run fsck on the unmounted partition.
    • Andreas Kuntzagk
      Andreas Kuntzagk over 14 years
      I already rebooted before. Extensive fsck running right now.
    • Chris
      Chris over 5 years
      This apparently unwelcome "duplicate question" has MUCH better answers than the "original" it points to :)
    • rajeev
      rajeev almost 5 years
      Two scenarios where df shows more than du: [1] say your block size is 1 KB and you have three files of 100 B, 200 B, and 500 B. They will occupy 3 blocks, so df will report 3 KB used while du will report 800 bytes used. [2] you have 100 1 KB blocks in total, all holding 300 B files; then df will report 100% used, but du will report only 30 KB used. There is a third scenario where df reports much more than du: either "deleted but open" files, or a hidden mount holding all those files!
  • Andreas Kuntzagk
    Andreas Kuntzagk over 14 years
    I already investigated this problem (see original post). BTW, you can also get that with lsof | grep deleted
  • Andreas Kuntzagk
    Andreas Kuntzagk over 14 years
    df -i does not report anything unusual, will go the fsck way now.
  • asdmin
    asdmin over 14 years
    Quite a lot of files around blocksize/2 in size can cause this problem by each occupying a whole block, creating an enormous amount of unavailable space (the unused remainder of each block). So do you store lots of small files there?
  • sleske
    sleske over 14 years
    That is true, but seems beside the point. The "in use" size reported by du and df differs, and that is independent of reserved blocks.
  • Alex
    Alex over 14 years
    Yeah, it still doesn't add up, but I figured that was adding to the discrepancies.
  • Andreas Kuntzagk
    Andreas Kuntzagk over 14 years
    How would I find out the number of files with such a size?
  • sleske
    sleske over 14 years
    Interesting, didn't think of that. Mounting a fs on a non-empty directory can do funny things...
  • Kyle Brandt
    Kyle Brandt over 14 years
    First you need to find your block size; if it is 4096, you want files smaller than 4 KB, so: find / -size -4k | wc -l
  • DisabledLeopard
    DisabledLeopard over 14 years
    df can show you the status of inodes for the disk or partition too which may shed some more light as to whether or not this is your situation - df -i /
  • Andreas Kuntzagk
    Andreas Kuntzagk over 14 years
    You're right, Kyle. I totally missed that on this long page.
  • Kyle Brandt
    Kyle Brandt over 14 years
    Andreas, it also doesn't make it that clear, I didn't think of it either.
  • ericslaw
    ericslaw over 14 years
    Used to fill up volumes with several 100 thousand 3k files... changing filesystem from xfs (on sgi) to reiserfs helped make diskspace more efficient. Not an option for many, but worked for us.
  • Admin
    Admin over 14 years
    chmod mountpoints to 000, so you get errors from scripts instead of them silently filling your root partition
  • sourcejedi
    sourcejedi almost 11 years
    @wolfgangsz Not exactly, unless you use du --apparent.
  • Dawid Jurczak
    Dawid Jurczak about 9 years
    If you're using ext to store lots of small files and are running out of inodes before you run out of blocks, then create your filesystems with mke2fs -i [NUM]. This flag is "bytes per inode", and if you make it equal your block size then you will always have enough inodes. But you'll have to experiment with the value to see what maximizes use of your space.
  • mkll
    mkll over 7 years
    In my case the used space shown by du and df was comparable, yet 23-24GB were unaccounted for. Setting the reserved blocks number to 1% freed those 23GB. Thanks!
  • Naveed Abbas
    Naveed Abbas over 4 years
    @user1686 Let's see: mkdir x && chmod 000 x && date > x/a. Nope, no warnings for the root user, and backup scripts usually run as root. Actually, the trick is to mount -t devpts devpts x underneath your actual mount; it's always read-only, even for root.