EXT4-fs errors and sd errors. Rejecting I/O to offline device
Solved by using contact cleaner on the HDDs, cables and plugs.
Author: eliongater
Updated on September 18, 2022
-
eliongater over 1 year
System Configuration:
- Ubuntu 12.04 Server 64-bit
- HP ProLiant DL140 G3
- 32 GB of RAM
- Intel Xeon X5365 quad-core 3 GHz
- LSI SAS1068
- 2x WD Black 2 TB SATA HDDs in RAID 1
Errors:
[ 2044.430919] EXT4-fs (sda1): previous I/O error to superblock detected
[ 2044.436018] sd 4:1:1:0: rejecting I/O to offline device
[ 2044.441364] EXT4-fs error (device sda1): __ext4_get_inode_loc:3683: inode #86663229: block 346555939: comm du: unable to read itable block
[ 2044.452399] sd 4:1:1:0: rejecting I/O to offline device
[ 2044.457844] sd 4:1:1:0: rejecting I/O to offline device
[ 2044.462877] sd 4:1:1:0: rejecting I/O to offline device
[ 2044.467606] sd 4:1:1:0: rejecting I/O to offline device
[ 2044.472357] sd 4:1:1:0: rejecting I/O to offline device
[ 2044.477110] sd 4:1:1:0: rejecting I/O to offline device
[ 2044.481844] sd 4:1:1:0: rejecting I/O to offline device
[ 2044.486613] sd 4:1:1:0: rejecting I/O to offline device
[ 2044.491372] sd 4:1:1:0: rejecting I/O to offline device
[ 2044.496131] sd 4:1:1:0: rejecting I/O to offline device
[ 2044.500872] EXT4-fs (sda1): previous I/O error to superblock detected
[ 2044.506010] sd 4:1:1:0: rejecting I/O to offline device
[ 2044.511376] EXT4-fs error (device sda1): __ext4_get_inode_loc:3683: inode #86671461: block 346556454: comm du: unable to read itable block
[ 2044.339461] block 346037795: comm du: unable to read itable block
[ 2044.339663] sd 4:1:1:0: rejecting I/O to offline device
[ 504.854978] EXT4-fs error (device sda1) in ext4_reserve_inode_write:4507: Journal has aborted
[ 504.826169] EXT4-fs error (device sda1) in ext4_orphan_del:2111: Journal has aborted
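When the log is this repetitive, counting the repeated message makes it easier to see how long the device stayed offline. A minimal sketch; the heredoc sample stands in for piping `dmesg` through the same `grep`:

```shell
# Count how many times the controller rejected I/O in a saved kernel log.
# In practice, replace the heredoc with:  dmesg | grep -c '...'
grep -c 'rejecting I/O to offline device' <<'EOF'
[ 2044.436018] sd 4:1:1:0: rejecting I/O to offline device
[ 2044.452399] sd 4:1:1:0: rejecting I/O to offline device
[ 2044.457844] sd 4:1:1:0: rejecting I/O to offline device
EOF
```

This prints `3` for the sample above.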
After these errors the filesystem goes read-only and nothing works properly afterwards; sometimes I can't even reboot. I have tried remounting the filesystem, but that doesn't work. Only a reboot has fixed it, and only temporarily.
Looking through the logs, I found these entries that might help:
[ 15.846694] sd 4:1:1:0: [sda] Write Protect is off
[ 15.846706] sd 4:1:1:0: [sda] Mode Sense: 03 00 00 08
[ 15.846918] sd 4:1:1:0: [sda] No Caching mode page found
[ 15.846929] sd 4:1:1:0: [sda] Assuming drive cache: write through
[ 15.847731] sd 4:1:1:0: [sda] No Caching mode page found
[ 15.847736] sd 4:1:1:0: [sda] Assuming drive cache: write through
[ 15.876479] sda: sda1 sda2 < sda5 >
[ 15.877403] sd 4:1:1:0: [sda] No Caching mode page found
[ 15.877409] sd 4:1:1:0: [sda] Assuming drive cache: write through
[ 15.877414] sd 4:1:1:0: [sda] Attached SCSI disk
[ 37.460186] EXT4-fs (sda1): INFO: recovery required on readonly filesystem
[ 37.460192] EXT4-fs (sda1): write access will be enabled during recovery
[ 38.929477] EXT4-fs (sda1): orphan cleanup on readonly fs
[ 38.929497] EXT4-fs (sda1): ext4_orphan_cleanup: deleting unreferenced inode 86523973
[ 38.930128] EXT4-fs (sda1): ext4_orphan_cleanup: deleting unreferenced inode 13572042
[ 38.945392] EXT4-fs (sda1): ext4_orphan_cleanup: deleting unreferenced inode 110888306
[ 38.964164] EXT4-fs (sda1): ext4_orphan_cleanup: deleting unreferenced inode 110890639
[ 38.964195] EXT4-fs (sda1): ext4_orphan_cleanup: deleting unreferenced inode 73400330
[ 38.964208] EXT4-fs (sda1): ext4_orphan_cleanup: deleting unreferenced inode 73400329
[ 38.964218] EXT4-fs (sda1): ext4_orphan_cleanup: deleting unreferenced inode 73400328
[ 38.964228] EXT4-fs (sda1): ext4_orphan_cleanup: deleting unreferenced inode 73400327
[ 38.964237] EXT4-fs (sda1): ext4_orphan_cleanup: deleting unreferenced inode 73400323
[ 38.964245] EXT4-fs (sda1): 9 orphan inodes deleted
[ 38.964250] EXT4-fs (sda1): recovery complete
[ 39.261874] EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: (null)
[ 54.271669] Adding 4190204k swap on /dev/sda5. Priority:-1 extents:1 across:4190204k
[ 54.345011] EXT4-fs (sda1): re-mounted. Opts: errors=remount-ro
[259523.631903] EXT4-fs (sda1): Unaligned AIO/DIO on inode 13372685 by AioMgr0-N; performance will be poor.
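The `Opts: errors=remount-ro` on that re-mount line likely explains the behaviour described above: ext4 is configured to flip the filesystem read-only on the first error it hits. A small sketch for pulling the recorded options out of a saved log line (sample embedded; in practice pipe `dmesg` into the same `sed`):

```shell
# Extract the mount options the kernel recorded for sda1.
echo '[ 54.345011] EXT4-fs (sda1): re-mounted. Opts: errors=remount-ro' \
  | sed -n 's/.*Opts: //p'
```

This outputs `errors=remount-ro`.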
I have tried the following:
- Memtest: two successful passes.
- LSI configuration: the array is reported as optimal and predicted failure is "No" for both drives.
- fsck -M -fy /dev/sdb1 from a live USB of 14.10: the filesystem appeared to be fine.
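One way to rehearse that fsck step without touching the real array is to force-check a scratch ext4 image first; the same flags then carry over to the real partition when run from the live USB. A sketch, assuming e2fsprogs (`mkfs.ext4`, `fsck.ext4`) is installed; the image file is a stand-in, not the real disk:

```shell
# Build a small ext4 image and force a full check on it.
IMG=$(mktemp)
dd if=/dev/zero of="$IMG" bs=1M count=16 status=none
mkfs.ext4 -q "$IMG"        # -q: suppress mkfs chatter
fsck.ext4 -f -y "$IMG"     # -f: check even if marked clean; -y: answer yes to fixes
echo "fsck exit code: $?"  # 0 = clean, 1 = errors were corrected
rm -f "$IMG"
```

Note that `fsck -M` skips mounted filesystems entirely, so on the running system it would silently do nothing for a mounted root; checking from a live USB, as was done here, is the right approach.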
Any ideas/help would be greatly appreciated.
-
David Foerster over 9 years: Your RAID seems to be degraded, since the kernel sees one physical drive as offline. Can you provide more info on the RAID and drive status?
-
eliongater over 9 years: What info would you like on them? The drives are connected to the LSI card and then appear as a single drive to the OS.
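For reference, the `4:1:1:0` in the errors is the SCSI host:channel:target:lun address of that single logical drive the controller exposes (tools like `lsscsi` map such addresses back to device nodes). A sketch for pulling the address out of a saved log line, with the sample embedded in place of a `dmesg` pipe:

```shell
# Extract the SCSI H:C:T:L address of the failing logical disk.
echo '[ 2044.436018] sd 4:1:1:0: rejecting I/O to offline device' \
  | grep -oE '[0-9]+:[0-9]+:[0-9]+:[0-9]+'
```

This outputs `4:1:1:0`.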