How can I optimise ext4 for reliability?
Solution 1
No. You can never assume any file system is 100% reliable.
Journaling file systems minimise data loss in the event of an unexpected outage. Extents and barriers help even more, but cannot eliminate all associated problems. Personally, I've never experienced data loss because of file system corruption when using journaling file systems.
Also, journaling is not disabled by default: ext4 journals metadata out of the box (the data=ordered mode). It is only full data journaling (data=journal) that is off by default.
Here's a good overview of ext4 and its improvements: http://kernelnewbies.org/Ext4
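If you want full data journaling rather than the default metadata-only mode, you can set it as a mount option. A minimal /etc/fstab sketch (the device /dev/sda1 and mount point /data are placeholders; substitute your own):

```
# Journal data blocks as well as metadata (slower, but safer on power loss)
/dev/sda1  /data  ext4  data=journal,errors=remount-ro  0  2
```

Note that data=journal roughly doubles write traffic, since every data block is written to the journal first and then to its final location.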
Solution 2
You could disable delayed allocation under ext4 with the nodelalloc mount option. That would make it significantly more likely that you recover your data if/when you suffer a power loss during a write, but it comes at the cost of more file-system fragmentation over time.
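The nodelalloc option can likewise go in /etc/fstab. A sketch (again, /dev/sda1 and /data are placeholder names for illustration):

```
# Disable delayed allocation: blocks are allocated at write() time,
# narrowing the window in which a power cut loses buffered data
/dev/sda1  /data  ext4  nodelalloc  0  2
```

For an embedded device without a UPS, combining nodelalloc with data=journal trades throughput for a smaller corruption window, which may be the right trade-off.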
Comments
-
amin over 1 year
As ext4 was introduced as more reliable than ext3, with block journaling, is there any chance of treating it as 100% reliable? What about enabling block journaling, which is disabled by default?
Following a friend's advice, to explain my case in more detail: I have an embedded Linux device; after installation the keyboard and monitor are detached and it runs standalone.
My duty is to make sure it has a reliable file system, because when errors occur there is no way to manually correct faults on the device. I can't force my customers to use a UPS with each device to prevent faults from power failure.
What more can ext4 offer me besides block journaling?
Thanks in advance.
-
Lekensteyn about 13 years
+1 for "you can never suppose something to be 100% reliable"
-
amin about 13 years
As Comparison_of_file_systems shows, block journaling is off while metadata journaling is on; that's a trade-off between reliability and speed.
-
user239558 about 10 years
I just had a server reboot, only to find massive data corruption on ext4, with files containing invalid data. This could not have happened on ZFS or Btrfs, because there the data has checksums.