XFS vs Ext4 vs others: which file system is stable and reliable for long-running 24/7 use?
Solution 1
- XFS used to be more fragile, but that issue appears to have been fixed.
- XFS was certainly slow on metadata operations, but that has recently been addressed as well.
- EXT4 is still receiving quite critical fixes, as the commit history in kernel.org's git shows.
- "EXT4 does not support concurrent writes, XFS does"
- (But) EXT4 is more "mainline".
So, the final answer depends on your precise requirements (as usual).
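The "concurrent writes" point above is debated in the comments below the question. A minimal sketch of what such a micro-benchmark could look like (this is a hypothetical illustration, not a benchmark from the thread; file names, sizes, and thread count are arbitrary): several threads each write their own file, and the wall-clock time for the whole batch is measured. Running it on an ext4 mount vs. an XFS mount would show whether the filesystem choice matters for your workload.

```python
# Hypothetical sketch: time N threads each writing its own file to the same
# directory, so concurrent-write behavior of the underlying filesystem shows
# up in the elapsed time. Sizes and thread count are arbitrary.
import os
import tempfile
import threading
import time

def write_file(path, chunk, count):
    """Write `chunk` to `path` `count` times, then force it to disk."""
    with open(path, "wb") as f:
        for _ in range(count):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())

def concurrent_write(dirpath, writers=4, chunk=b"x" * 65536, count=64):
    """Run `writers` threads writing separate files; return elapsed seconds."""
    threads = [
        threading.Thread(
            target=write_file,
            args=(os.path.join(dirpath, f"w{i}.dat"), chunk, count),
        )
        for i in range(writers)
    ]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        elapsed = concurrent_write(d)
        total = 4 * 65536 * 64
        print(f"wrote {total} bytes across 4 files in {elapsed:.3f}s")
```

To compare filesystems you would point `dirpath` at a directory on each mount in turn; absolute numbers depend heavily on hardware and caching, so only relative differences on the same machine are meaningful.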
Solution 2
The choice of filesystem makes a difference in certain cases, so you should check whether your particular use cases are affected by it. For the three very generic bullet points you list, it makes no difference whether you use ext4 or XFS.
If you have a requirement to use files larger than 16 TB, you will have to use XFS (ext4 will support more than 16 TB soon, but not yet).
Solution 3
ZFS is the only choice for reliability.
Its one drawback is that it doesn't like RAID controllers, since it handles its own redundancy: you have to use JBOD, which may disable caching on some RAID controllers (3ware, for example), or single-drive volumes.
EXT4 has a 16 TiB limit, unless it is running on a 64-bit Linux system and the EXT4 volume was created with the "64bit" feature flag, which enables 64-bit block addressing so the filesystem can grow beyond that limit.
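As a minimal sketch of the "64bit" feature flag mentioned above (assuming e2fsprogs is installed; the backing file path is hypothetical, and a sparse file is used here so nothing real is formatted):

```shell
# Safe demo against a sparse file rather than a real disk.
truncate -s 1G /tmp/ext4-demo.img           # create a sparse backing file
mkfs.ext4 -F -O 64bit /tmp/ext4-demo.img    # -F: allow formatting a regular file
dumpe2fs -h /tmp/ext4-demo.img | grep -i features   # feature list should include "64bit"
```

On recent e2fsprogs the 64bit feature is enabled by default for new filesystems; passing `-O 64bit` makes it explicit.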
Solution 4
EXT4 can [still] be VERY unstable and buggy; it's very new. Compared to XFS, which is very stable and proven over the years, it doesn't have much to offer. PS: I've experienced bugs with EXT4 myself. It either froze the whole system during copy operations, or it simply lost my data.
Admin
Updated on September 18, 2022

Comments
- Admin almost 2 years: XFS and Ext4: which file system is really stable and reliable for the long run, with heavy disk writes and reads?
    - the system will be used in a place where service is 24/7, and there are reads and writes to the disk every second
    - the system needs 99.95% uptime over roughly a one-year run
    - the system's maximum downtime needs to be about 20 hours per year

  Which file system is the best choice for such a challenge? (I wanted to use Solaris or FreeBSD, but for my project I have to use Ubuntu, ArchLinux, Fedora, or CentOS.) But I am confused about which file system to choose.
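A quick sanity check on the two availability figures in the question (a sketch, assuming a 365-day year): 99.95% uptime actually allows only about 4.4 hours of downtime per year, which is considerably stricter than the 20-hour figure; 20 hours of downtime per year corresponds to roughly 99.77% uptime.

```python
# Reconcile the question's two availability targets (assumes a 365-day year).
hours_per_year = 365 * 24  # 8760

# Downtime budget implied by 99.95% uptime:
allowed_downtime = (1 - 0.9995) * hours_per_year
print(f"99.95% uptime allows {allowed_downtime:.2f} h of downtime/year")  # 4.38

# Uptime implied by a 20 h/year downtime budget:
uptime_for_20h = (1 - 20 / hours_per_year) * 100
print(f"20 h/year downtime is about {uptime_for_20h:.2f}% uptime")  # 99.77
```

So the two requirements are not equivalent; whichever is the real constraint should drive the design.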
- tshepang about 12 years: Use the default one provided by the installer, though I strongly believe any other available on the selection menu ought to be good enough, provided you are using a stable release.
- psusi about 12 years: Poppycock. Ext has been handling concurrent writes quite well since the dawn of Linux.
- poige about 12 years: @psusi, check out the link, it's free.
- psusi about 12 years: I have; it's poppycock. Two writers are never going to have higher aggregate throughput than one, unless something is dreadfully wrong with your setup. The best case is not to have any lower aggregate throughput, and that is something ext has been pretty good at staying close to for 20 years. That is not to say that XFS is bad at it, just that ext has been doing this just fine since long before XFS was first thought up.
- poige about 12 years: @psusi, well, it may well turn out, then, that what's dreadfully wrong is having a RAID. ;-)
- psusi about 12 years: Not unless there is something very wrong with the RAID... a single sequential write is going to give the highest sustained throughput. Multiple writers mean more seeking, which means more time spent NOT writing.
- Erik Aronesty almost 11 years: Unless you're talking about multiple NFS writers, which for a large storage device you almost certainly are.
- poige over 6 years: there's no stable ZFS for Linux ;-P