Understanding XFS inode limits
Solution 1
While you increased the maximum share of space inodes can use, you probably have too little free space to support over 1.5 billion inodes. As each inode uses about 512 bytes, I estimate you have ~750 GB of free/available space.
Try freeing some space and/or expanding your filesystem.
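The ~750 GB estimate above can be reproduced with shell arithmetic. This is a sketch that assumes the XFS default of 512 bytes per inode (the default when CRCs are enabled) and uses the ~1.57 billion inode ceiling reported by df -i in the question:

```shell
inodes=1566536296          # inode ceiling reported after xfs_growfs -m 100
bytes_per_inode=512        # default XFS inode size with crc=1
space=$(( inodes * bytes_per_inode ))
echo "$space bytes = $(( space / 1073741824 )) GiB needed just to hold the inodes"
```

That works out to roughly 746 GiB, which matches the ~750 GB figure.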
Solution 2
Believe it or not, as configured your volume is too small for what you're trying to accomplish. The minimum inode size for XFS when using cyclic redundancy checks (which you are, and which is the default) is 512 bytes. Two billion symlinks at 512 bytes per inode means you need at least ~1.02 TB (about 953 GiB) just to store those inodes.
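The arithmetic behind that minimum, again assuming 512-byte inodes:

```shell
links=2000000000                 # target number of symlink entries
bytes=$(( links * 512 ))         # total inode space required
echo "$bytes bytes"              # 1024000000000 bytes
echo "$(( bytes / 1073741824 )) GiB minimum"   # ~953 GiB, i.e. ~1.02 TB
```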
If you have the ability to reformat that volume, you can make it suit your purposes better.
-i maxpct=90
during the mkfs process will create a new volume that can use up to 90% of its space for inodes. That alone won't make your current hardware fit, but if you can add capacity, it will let XFS use nearly all of the space to track symlinks.
-m crc=0 -i size=256
during the mkfs process will disable CRC support and halve your inode size to 256 bytes. This option would fit your current hardware, at the cost of losing protection against hardware-level corruption.
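Putting the two options above into concrete invocations (the device name /dev/loop3 is taken from the question; reformatting destroys all existing data on the volume, so back it up first):

```shell
# Option 1: allow inodes to occupy up to 90% of the filesystem
mkfs.xfs -f -i maxpct=90 /dev/loop3

# Option 2: disable CRCs and shrink inodes to 256 bytes
mkfs.xfs -f -m crc=0 -i size=256 /dev/loop3
```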
Updated on September 18, 2022

Comments
- Synesso over 1 year
I reached the inode limit on my XFS partition. There are plenty of questions about this. Some suggest the answer is to increase the maximum percentage of space allocated for inodes. Or, as the
xfs_growfs
manpage puts it:

-m Specify a new value for the maximum percentage of space in the filesystem that can be allocated as inodes.
I tried this, but I'm not sure what I'm seeing. The default was 25% when I hit the limit. That gave me 409,600,129 inodes on a 781GB disk image.
I increased it to 100%, and it now will allow me 1,566,536,296 inodes.
$ df -i
Filesystem  Inodes      IUsed      IFree       IUse%  Mounted on
...
/dev/loop3  1566536040  409600129  1156935911  27%    /mnt/tiles
I expect to write over 2 billion entries (mostly symlinks), so even at 100% it is not sufficient. It was my understanding that XFS can support much greater quantities of files, so I think I'm missing something.
I tried remounting with
-o inode64
, but there was no difference. (This option should be the default anyway.)

$ sudo xfs_growfs -m 100 .
meta-data=/dev/loop3             isize=512    agcount=4, agsize=51200000 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1 spinodes=0 rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=204800000, imaxpct=100
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=100000, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
inode max pct unchanged, skipping
Is the size of my drive insufficient? Or is there some other configuration or limitation I'm unaware of? Why does 100% inode allocation only yield ~1.5 billion inodes?
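A quick sanity check, using only figures from the xfs_growfs output above (204800000 blocks, 4096-byte blocks, 512-byte inodes), shows the hard ceiling even at imaxpct=100:

```shell
blocks=204800000
bsize=4096
isize=512
fs_bytes=$(( blocks * bsize ))       # 838860800000 bytes, ~781 GiB
max_inodes=$(( fs_bytes / isize ))   # 1638400000
echo "$max_inodes inodes at most"
```

That is ~1.64 billion; df reports somewhat fewer (1,566,536,040), presumably because the log and other metadata consume part of the space. Either way the ceiling is below the 2 billion entries needed, so at 512 bytes per inode the device is simply too small.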
- shodanshok over 4 years: +1 for the good answer, but be aware that a 256-byte inode has much less space for embedded ACLs and extended attributes. If you are using a SELinux-heavy OS (such as RHEL or CentOS), the performance impact can be significant.