ext4: Running out of inodes
Solution 1
Is there a way to solve this without creating and copying to a new partition?
Nope. The number of inodes is fixed when the filesystem is created, as the mke2fs(8) man page warns:
Be warned that it is not possible to expand the number of inodes on a filesystem after it is created, so be careful deciding the correct value for this parameter.
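Since the count is fixed at creation time, the only real fix is to make a new filesystem with more inodes and copy the data over. A minimal sketch, using a throwaway file-backed image purely for illustration (on real hardware you would point mkfs.ext4 at the new partition or LV instead of /tmp/demo.img):

```shell
# Throwaway image so nothing real is touched; substitute your device.
truncate -s 256M /tmp/demo.img

# -N requests an explicit inode count; alternatively -i lowers the
# bytes-per-inode ratio (smaller -i => more inodes). Both are
# mkfs-time-only decisions on ext4.
mkfs.ext4 -F -q -N 65536 /tmp/demo.img

# Verify what was actually allocated (mke2fs may round the count up
# to a whole number of inodes per block group):
tune2fs -l /tmp/demo.img | grep -i 'inode count'
```

After mkfs, copy the data across with something metadata-preserving such as rsync -aHAX or cp -a, then swap the mount points.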
Solution 2
Mostly no, but in your case you are using LVM and there is an LV (Logical Volume) for /home.
If you run pvdisplay and look for "Free PE" (free physical extents), it may be possible to run lvextend to increase the size of the home LV, and then run resize2fs to grow the filesystem into the new space.
The downside is that resize2fs only adds inodes at the same bytes-per-inode ratio the current filesystem already has, so the gain is only proportional to the space you add.
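The steps above can be sketched as follows. The VG name system and LV name home are inferred from the /dev/mapper/system-home path in the question; check yours with lvs first, and note that everything here needs root and the size to add (+10G) is just a placeholder:

```shell
# 1. Check whether the volume group has unallocated space ("Free PE").
pvdisplay
vgdisplay system

# 2. Grow the home LV -- here by 10 GiB, assuming that much is free.
lvextend -L +10G /dev/system/home

# 3. Grow the ext4 filesystem into the new space. Online growing of a
#    mounted ext4 filesystem is supported.
resize2fs /dev/mapper/system-home
```

Newer lvextend versions can combine steps 2 and 3 with lvextend -r (--resizefs), which calls the filesystem resize tool for you.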
What you need to do is find which directory has a lot of files, and decide whether you need them.
A 0-byte file will use an inode.
$ ls -la /home
drwxr-xr-x 194 criggie criggie 28672 Sep 8 18:13 criggie
drwxr-xr-x 2 statler statler 4096 Dec 13 2015 statler
drwxr-xr-x 2 waldorf waldorf 4096 Dec 21 2014 waldorf
Notice the 194 in the second column? This shows there are a lot of inodes in use in that directory. cd into that directory and repeat.
I suspect you have a temp directory or something with many thousands of small files, which can likely be purged.
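One way to locate such a directory (an illustrative sketch using GNU find, not from the original answer) is to count regular files per parent directory and rank them:

```shell
# Print each file's parent directory, then tally and rank the totals.
# -xdev stays on the /home filesystem, matching df -i's view of it.
find /home -xdev -type f -printf '%h\n' | sort | uniq -c | sort -rn | head -20
```

Each output line is a count followed by a directory path, with the biggest inode consumers at the top.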
Asked by guettli (http://thomas-guettler.de/; Working out loud: https://github.com/guettli/wol)

Updated on September 18, 2022

Comments
-
guettli over 1 year
I am running out of inodes. Only 11% available:
the-foo:~ # df -i
Filesystem              Inodes   IUsed   IFree IUse% Mounted on
/dev/mapper/system-home 9830400 8702297 1128103   89% /home
Is there a way to solve this without creating and copying to a new partition?
Details:
the-foo:~ # tune2fs -l /dev/mapper/system-home
tune2fs 1.42.6 (21-Sep-2012)
Filesystem volume name:   <none>
Last mounted on:          /home
Filesystem UUID:          55899b65-15af-437d-ac56-d323c702f305
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              9830400
Block count:              39321600
Reserved block count:     1966080
Free blocks:              22958937
Free inodes:              2706313
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      1014
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
Flex block group size:    16
Filesystem created:       Tue Jul  8 08:02:22 2014
Last mount time:          Sun Apr 24 22:33:00 2016
Last write time:          Thu Sep  8 09:18:01 2016
Mount count:              11
Maximum mount count:      10
Last checked:             Tue Jul  8 08:02:22 2014
Check interval:           0 (<none>)
Lifetime writes:          349 GB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
First orphan inode:       2759586
Default directory hash:   half_md4
Directory Hash Seed:      e4402d28-9b15-46e2-9521-f0e25dfb58d0
Journal backup:           inode blocks
Please let me know if more details are needed.
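From the Block count, Block size, and Inode count figures in the tune2fs output you can derive the filesystem's bytes-per-inode ratio, which is also the rate at which a resize would add inodes (a quick check added for illustration, not part of the original question):

```shell
# Values copied from the tune2fs -l output above:
block_count=39321600
block_size=4096
inode_count=9830400

# bytes-per-inode ratio: one inode per this many bytes of capacity.
echo $(( block_count * block_size / inode_count ))   # prints 16384
```

So this filesystem was created with one inode per 16 KiB, and growing it adds one new inode per 16 KiB added.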
-
glglgl over 7 years: @neutrinus No. That's about the size of an inode (256 vs. 128 bytes), not about the number of inodes.
-
neutrinus over 7 years: @glglgl Sure, but they also provide a solution to do it "in place": by resizing two partitions.
-
scai over 7 years: Since increasing the number of inodes is not possible: Try to understand why you are running out of inodes. Maybe you have a directory full of small files that just needs to be flushed from time to time. Alternatively check if putting these files into an archive is an option for you.
-
ilkkachu over 7 years: No, the second column of ls -l is the number of (hard) links to that file. For a directory, it's the count of subdirectories + 2 (each subdirectory of D has a .. entry referring to D, plus one for the directory's own . entry, plus one for its actual name entry in its parent).
-
Criggie over 7 years: @ilkkachu OK, thanks for that, you are correct. Still, it's an indicator of which directories have a lot of files inside them, and they are worth checking for large numbers of small files.
-
Kevin over 7 years: @Criggie: No. It is an indicator of which directories have a lot of immediate subdirectories. If foo/ contains 10,000 files but no subdirectories, it will have a link count of two. If bar/ contains one subdirectory bar/baz/, which in turn contains 10,000 subdirectories, bar/ will have a link count of 3. The link count does not provide the information you are attempting to divine.
-
ilkkachu over 7 years: In any case, you have to use something like find -size -8192c to find files smaller than, say, 8 kB (half the default bytes per inode).
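The link-count rule discussed in the comments above can be checked directly. This is a standalone demonstration (not part of the thread) using GNU stat:

```shell
demo=$(mktemp -d)
mkdir "$demo/foo"
touch "$demo/foo/f1" "$demo/foo/f2"   # plain files do not change the count

# A directory with no subdirectories has link count 2:
# its own "." entry plus its name entry in the parent.
stat -c '%h' "$demo/foo"              # prints 2

mkdir "$demo/foo/sub1" "$demo/foo/sub2"
# Each immediate subdirectory's ".." entry adds one:
stat -c '%h' "$demo/foo"              # prints 4

rm -rf "$demo"
```

This is why Kevin's example holds: 10,000 plain files leave the count at 2, while each direct subdirectory raises it by exactly one.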