How to Free Inode Usage?


Solution 1

It's quite easy for a disk to have a large number of inodes used even if the disk is not very full.

An inode is allocated to a file so, if you have gazillions of files, all 1 byte each, you'll run out of inodes long before you run out of disk.
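A quick way to see the mismatch is to compare block usage with inode usage on the same filesystem (these are standard df options, shown here purely as an illustration):

df -h /    # space usage by bytes
df -i /    # inode usage; watch the IUse% column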

It's also possible that deleting files will not reduce the inode count if the files have multiple hard links. As I said, inodes belong to the file, not the directory entry. If a file has two directory entries linked to it, deleting one will not free the inode.
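If you want to see that for yourself, here's a small throwaway experiment (demo_file and demo_link are hypothetical names, not from the original question):

touch demo_file
ln demo_file demo_link      # second directory entry, same inode
ls -li demo_file demo_link  # first column shows the shared inode number
rm demo_file                # the inode is NOT freed...
ls -li demo_link            # ...because demo_link still references it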

Additionally, you can delete a directory entry but, if a running process still has the file open, the inode won't be freed.
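If you suspect that's happening, lsof (where available) can list files that have been deleted but are still held open, so you know which processes to restart instead of rebooting blindly; a minimal sketch:

sudo lsof +L1    # open files with a link count of zero (deleted but still open)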

My initial advice would be to delete all the files you can, then reboot the box to ensure no processes are left holding the files open.

If you do that and you still have a problem, let us know.

By the way, if you're looking for the directories that contain lots of files, this script may help:

#!/bin/bash
# count_em - count files in all subdirectories under current directory.
echo 'echo $(ls -a "$1" | wc -l) $1' >/tmp/count_em_$$
chmod 700 /tmp/count_em_$$
find . -mount -type d -print0 | xargs -0 -n1 /tmp/count_em_$$ | sort -n
rm -f /tmp/count_em_$$
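If inodes are so scarce that even creating that temporary script fails (see the next solution), a rough equivalent that avoids the temp file altogether might look like this (a sketch, untested on your box; note that sort may still need temporary space for very large outputs):

find . -mount -type d -print0 |
while IFS= read -r -d '' dir; do
    printf '%s %s\n' "$(ls -a "$dir" | wc -l)" "$dir"
done | sort -n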

Solution 2

If you are very unlucky, you may have used close to 100% of all inodes and be unable to create the script above. You can check your inode usage with df -ih.

Then this bash command may help you:

sudo find . -xdev -type f | cut -d "/" -f 2 | sort | uniq -c | sort -n

And yes, this will take time, but you can locate the directory with the most files.
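One of the comments below asks for a single awk command instead of the cut/uniq/sort pipeline; a sketch of such a one-pass version (same idea, just counting per top-level directory inside awk):

sudo find . -xdev -type f | awk -F/ '{ count[$2]++ } END { for (d in count) print count[d], d }' | sort -n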

Solution 3

My situation was that I was out of inodes and had already deleted just about everything I could.

$ df -i
Filesystem     Inodes  IUsed  IFree IUse% Mounted on
/dev/sda1      942080 942069     11  100% /

I am on Ubuntu 12.04 LTS and could not remove the old Linux kernels, which took up about 400,000 inodes, because apt was broken due to a missing package. And I couldn't install the missing package because I was out of inodes, so I was stuck.

I ended up deleting a few old Linux kernels by hand to free up about 10,000 inodes:

$ sudo rm -rf /usr/src/linux-headers-3.2.0-2*
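(If you go this route, it's worth checking first which kernel is actually running so you don't delete its headers; a quick check, assuming a Debian/Ubuntu system:)

$ uname -r                              # the running kernel version
$ dpkg -l 'linux-headers-*' | grep ^ii  # header packages currently installed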

This was enough to let me install the missing package and fix my apt:

$ sudo apt-get install linux-headers-3.2.0-76-generic-pae

and then remove the rest of the old Linux kernels with apt:

$ sudo apt-get autoremove

Things are much better now:

$ df -i
Filesystem     Inodes  IUsed  IFree IUse% Mounted on
/dev/sda1      942080 507361 434719   54% /

Solution 4

My solution:

First, check whether this is an inode problem with:

df -ih

Then look for the root-level folders with the largest inode counts:

for i in /*; do echo "$i"; find "$i" | wc -l; done

Then drill down into specific folders:

for i in /src/*; do echo "$i"; find "$i" | wc -l; done
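A variant (based on suggestions in the comments below) that prints each count on the same line and shows only the ten largest directories; /src/* is just the same illustrative path as above:

for i in /src/*; do printf '%s %s\n' "$i" "$(find "$i" 2>/dev/null | wc -l)"; done | sort -k2 -nr | head -10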

If the culprit is old Linux headers, try removing the oldest with:

sudo apt-get autoremove linux-headers-3.13.0-24

Personally, I moved them to a mounted folder (because for me the last command failed) and installed the latest with:

sudo apt-get autoremove -f

This solved my problem.

Solution 5

I had the same problem and fixed it by removing the PHP sessions directory:

rm -rf /var/lib/php/sessions/

It may be under /var/lib/php5 if you are using an older PHP version.

Recreate it with the following permissions:

mkdir /var/lib/php/sessions/ && chmod 1733 /var/lib/php/sessions/

The default permissions for this directory on Debian are drwx-wx-wt (1733).
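A less drastic alternative (suggested in the comments below) is to delete only stale session files and leave the directory in place, so you don't have to recreate it or worry about its permissions; the 24-hour cutoff (1440 minutes) here is just an assumption, adjust it to your session lifetime:

sudo find /var/lib/php/sessions/ -type f -mmin +1440 -delete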

Author: Danf

Updated on July 10, 2022

Comments

  • Danf
    Danf almost 2 years

    I have a disk drive where the inode usage is 100% (using df -i command). However after deleting files substantially, the usage remains 100%.

    What's the correct way to do it then?

    How is it possible that a disk drive with less disk space usage can have higher Inode usage than disk drive with higher disk space usage?

    Is it possible if I zip lot of files would that reduce the used inode count?

  • SteMa
    SteMa almost 12 years
That does the trick. My problem was having an incredible number of sessions in the /lib/php/sessions directory. Maybe somebody has the same problem.
  • normeus
    normeus almost 12 years
Parallels Plesk will not load, FTP not able to open a session, and "disk quota exceeded (122)" are some of the problems you'll get when you have reached the maximum number of inodes (~ files). Your service provider may set the max as low as 20,000 inodes (~ files) even if you have UNLIMITED space.
  • mogsie
    mogsie over 11 years
    Someone should rewrite this find, cut, uniq sort into a single awk command!
  • Mikko Rantalainen
    Mikko Rantalainen over 11 years
    Sometimes it also helps to try to locate directories that take lots of space. For example, if you have mod_disk_cache enabled with Apache default configuration, you'll find that each directory below /var/cache/apache2/mod_disk_cache only has sensible amount of entries but the whole hierarchy eats all your inodes. Running du -hs * may give hints about places that take more space than you're expecting.
  • alxndr
    alxndr over 11 years
    @mogsie, would awk be able to handle the potentially millions of lines that find would return?
  • alxndr
    alxndr over 11 years
    Of course, the >/tmp/count_em_$$ will only work if you have space for it... if that's the case, see @simon's answer.
  • paxdiablo
    paxdiablo over 11 years
    @alxndr, that's why it's often a good idea to keep your file systems separate - that way, filling up something like /tmp won't affect your other file systems.
  • mogsie
    mogsie about 11 years
    @alxndr awk could keep a hash of the directory and the count of files without uniqing and sorting a gazillion lines. That said, perhaps here's an improvement: find . -maxdepth 1 -type d | grep -v '^\.$' | xargs -n 1 -i{} find {} -xdev -type f | cut -d "/" -f 2 | uniq -c | sort -n — this only sorts the last list.
  • Mikko Rantalainen
    Mikko Rantalainen about 11 years
    If you cannot create any files, even that can fail because sort may fail to keep everything in the memory and will try to automatically fall back to writing a temporary file. A process which would obviously fail...
  • Mohanraj
    Mohanraj about 11 years
Your answer covers the case where the system stops holding a deleted file open after a reboot. But the question asked was "how to reclaim or reuse the inodes after the directory entry is deleted?". Basically, the Linux kernel allocates a new inode to a file whenever it is created, but does not automatically reclaim the inode when you delete the file.
  • J_McCaffrey
    J_McCaffrey over 10 years
    Thanks for this, this totally helped me out. I had a small VM 'run out of space', but really it was the inodes. At first I went around cleaning out large files, but it wasn't helping, then I ran your script and found a directory with 60k little files in it. I got rid of them and now I'm back in business. Thanks!
  • Frederick Nord
    Frederick Nord over 9 years
    sort failed for me, but I was able to give --buffer-size=10G which worked.
  • joystick
    joystick over 8 years
    In my case issue was SpamAssasin-Temp. find /var/spool/MailScanner/incoming/SpamAssassin-Temp -mtime +1 -print | xargs rm -f did the job :) Thanks!
  • Ashish Karpe
    Ashish Karpe over 8 years
    @paxdiablo you said "My initial advice would be to delete all the files you can, then reboot the box to ensure no processes are left holding the files open." but its prod server so can't reboot so how to free those inodes without reboot
  • paxdiablo
    paxdiablo over 8 years
@AshishKarpe, I assume you're talking about your own situation since the OP made no mention of production servers. If you can't reboot immediately then there are two possibilities. First, hope that the processes in flight eventually close the current files so disk resources can be freed up. Second, even production servers should have scope for rebooting at some point - simply schedule some planned downtime or wait for the next window of downtime to come up.
  • Ashish Karpe
    Ashish Karpe over 8 years
Found lots of small files being created in /tmp, which was eating up inodes, so I freed them with find /tmp -type f -mmin +100 -name "*" | perl -nle 'unlink;'. Thanks
  • SinaOwolabi
    SinaOwolabi about 8 years
    So grateful for this post. I had a RHEL 6.3 server that had all partitions free but thanks to the count_em script I was able to see the inodes in the /var partition were all used up, thanks to some weird cache files filling up /var/lib/sss/db directory . All my applications including auditd, lvm were screaming no space left. Now on to the REAL problems... :-(
  • Michael Terry
    Michael Terry almost 8 years
For me, this was taking hours. However, there's a simple solution: when the second command hangs on a particular directory, kill the current command and restart, changing /* to whatever directory it was hanging on. I was able to drill down to the culprit in less than a minute.
  • beldaz
    beldaz almost 8 years
    This was the closest to my own approach in a similar situation. It's worth noting that a more cautious approach is well documented at help.ubuntu.com/community/Lubuntu/Documentation/…
  • jarno
    jarno over 7 years
    I suppose you want ls -A instead of ls -a. Why would you want to count . and ..?
  • jarno
    jarno over 7 years
    @mogsie I used some gawk in my version. That also counts directories.
  • jarno
    jarno over 7 years
    @mogsie here is a version of your script that counts also directories and handles filenames containing newlines: find . -maxdepth 1 -not -path . -type d -print0 | xargs -0 -n 1 -I{} find {} -xdev -not -path {} -print0 | gawk 'BEGIN{RS="\0";FS="/";ORS="\0"}{print $2}' | uniq -cz | sort -nz. The gawk command could be replaced by grep -ozZ '\./[^/]*/' (Tested by GNU grep 2.25) Unfortunately cut does not handle null terminated lines.
  • Sibidharan
    Sibidharan almost 7 years
    Any idea why this happens?
  • tonysepia
    tonysepia over 6 years
    My case exactly! But had to use "sudo apt-get autoremove -f" to progress
  • Pacerier
    Pacerier over 6 years
    @FrederickNord, What's the error message when sort fails? How does it report failure?
  • Pacerier
    Pacerier over 6 years
    @SteMa, Doesn't the directory self-cleanup?
  • grim
    grim about 6 years
    @Sibidharan in my case it was because the PHP cron job to clear the old PHP sessions was not working.
  • Shadow
    Shadow almost 6 years
    rm -rf /var/lib/php/sessions/* would probably be a better command - it won't remove the session directory, just its contents... Then you don't have to worry about recreating it
  • Bohne
    Bohne over 5 years
    we love examples here at so ;)
  • Mars Lee
    Mars Lee over 5 years
    Is it safe to do this: sudo rm -rf /usr/src/linux-headers-3.2.0-2*, if I am sure I am not using that kernel?
  • Dominique Eav
    Dominique Eav over 5 years
    @MarsLee You can check which kernel is currently running with "uname -a"
  • mwfearnley
    mwfearnley over 5 years
    From what I can tell from the article/comments, this is faster than rm * for lots of files, due to expanding the wildcard and passing/processing each argument, but rm test/ is fine for deleting a test/ folder containing lots of files.
  • cscracker
    cscracker about 5 years
    I used this variant of your command in order to print the numbers on the same line: for i in /usr/src/*; do echo -en "$i\t"; find $i 2>/dev/null |wc -l; done
  • Mohit
    Mohit about 5 years
    I did not have php session but magento session issue, similar to this. Thanks for the direction.
  • Mark Stosberg
    Mark Stosberg almost 5 years
    This does not help to detect if "too many inodes" are the problem.
  • Mark Simon
    Mark Simon almost 5 years
    for i in /src/*; do echo "$i, `find $i |wc -l`"; done|sort -nrk 2|head -10 show off top 10 largest directory
  • Urda
    Urda almost 5 years
    This has nothing to do with Docker.
  • Morten Grum
    Morten Grum over 4 years
    Calling $ sudo apt-get autoremove alone, did the trick for me.
  • Admin
    Admin over 4 years
PHP sessions should not be cleared via cron jobs; set session.gc_maxlifetime in php.ini php.net/manual/en/…
  • aecend
    aecend about 4 years
    Heads up, this works well, but make sure you set the permissions correctly on the blank directory! I didn't do this and inadvertently changed the permissions on my PHP sessions directory. Took two hours to figure out what I screwed up.
  • CodeMonkey
    CodeMonkey over 2 years
    Thank you, this told me where to look. Now I still had the issue that I couldn't delete the files, because I got "/bin/rm: Argument list too long", this could then be resolved with for i in * ; do rm $i ; done.
  • Nem
    Nem over 2 years
    I don't know why this solution is not even considered! You saved my day!
  • Wirat Leenavonganan
    Wirat Leenavonganan over 2 years
Many thanks.. saved my day..
  • bcag2
    bcag2 over 2 years
@Urda I have a similar issue on a VM with Ubuntu 18.04 and 9 containers. After bringing all containers down (one threw a timeout), df -i returned 86%; after bringing the 5 main containers (used in production) back up, df -i returned 13%!
  • Bill.Zhuang
    Bill.Zhuang about 2 years
    it works, thanks for save my time.