mdadm raid doesn't mount
Your arrays have not been started properly. Stop them to remove them from the running configuration:
mdadm --stop /dev/md12[567]
Now try using the autoscan and assemble feature.
mdadm --assemble --scan
Assuming that works, save your config (on a Debian derivative). The next command overwrites your existing config, so make a backup first:
mv /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.old
/usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf
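For reference, the generated file normally identifies arrays by UUID rather than by member device names; the entries look roughly like this (the UUIDs below are made up for illustration):
ARRAY /dev/md0 metadata=0.90 UUID=01234567:89abcdef:01234567:89abcdef
ARRAY /dev/md1 metadata=0.90 UUID=89abcdef:01234567:89abcdef:01234567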
You should now survive a reboot, with the arrays auto-assembled and started every time.
If not, give the output of:
mdadm --examine /dev/sd[bc]6 /dev/sd[bc]7
It'll be a bit long, but it shows everything you need to know about the arrays and their member disks, including their state.
Just as an aside, it normally works better if you don't create multiple RAID arrays on one disk (i.e., /dev/sd[bc]6 and /dev/sd[bc]7 as separate arrays). Instead, create only one array, and then create partitions on the array if you must. Most of the time, LVM is a much better way to partition your array.
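As a minimal sketch of that layout (device names, the volume group name, and sizes are illustrative, not taken from this system):
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
pvcreate /dev/md0                 # turn the array into an LVM physical volume
vgcreate vg0 /dev/md0             # one volume group on top of it
lvcreate -L 100G -n media vg0     # 'partitions' become logical volumes
mkfs.ext3 /dev/vg0/media          # filesystems go on the logical volumes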
Updated on September 18, 2022

Comments
- stdcerr almost 2 years
I have a RAID array defined in /etc/mdadm.conf like this:
ARRAY /dev/md0 devices=/dev/sdb6,/dev/sdc6
ARRAY /dev/md1 devices=/dev/sdb7,/dev/sdc7
but when I try to mount them, I get this:
# mount /dev/md0 /mnt/media/
mount: special device /dev/md0 does not exist
# mount /dev/md1 /mnt/data
mount: special device /dev/md1 does not exist
/proc/mdstat meanwhile says:
# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md125 : inactive dm-6[0](S)
      238340224 blocks
md126 : inactive dm-5[0](S)
      244139648 blocks
md127 : inactive dm-3[0](S)
      390628416 blocks
unused devices: <none>
So I tried this:
# mount /dev/md126 /mnt/data
mount: /dev/md126: can't read superblock
# mount /dev/md125 /mnt/media
mount: /dev/md125: can't read superblock
The fs on the partitions is ext3, and when I specify the fs with -t, I get:
mount: wrong fs type, bad option, bad superblock on /dev/md126,
       missing codepage or helper program, or other error
       (could this be the IDE device where you in fact use
       ide-scsi so that sr0 or sda or so is needed?)
       In some cases useful info is found in syslog - try
       dmesg | tail  or so
How can I get my raid arrays mounted? It's worked before.
EDIT 1
# mdadm --detail --scan
mdadm: cannot open /dev/md/127_0: No such file or directory
mdadm: cannot open /dev/md/0_0: No such file or directory
mdadm: cannot open /dev/md/1_0: No such file or directory
EDIT 2
# dmsetup ls
isw_cabciecjfi_Raid7    (252:6)
isw_cabciecjfi_Raid6    (252:5)
isw_cabciecjfi_Raid5    (252:4)
isw_cabciecjfi_Raid3    (252:3)
isw_cabciecjfi_Raid2    (252:2)
isw_cabciecjfi_Raid1    (252:1)
isw_cabciecjfi_Raid     (252:0)
# dmsetup table
isw_cabciecjfi_Raid7: 0 476680617 linear 252:0 1464854958
isw_cabciecjfi_Raid6: 0 488279484 linear 252:0 976575411
isw_cabciecjfi_Raid5: 0 11968362 linear 252:0 1941535638
isw_cabciecjfi_Raid3: 0 781257015 linear 252:0 195318270
isw_cabciecjfi_Raid2: 0 976928715 linear 252:0 976575285
isw_cabciecjfi_Raid1: 0 195318207 linear 252:0 63
isw_cabciecjfi_Raid: 0 1953519616 mirror core 2 131072 nosync 2 8:32 0 8:16 0 1 handle_errors
EDIT 3
# file -s -L /dev/mapper/*
/dev/mapper/control:              ERROR: cannot read `/dev/mapper/control' (Invalid argument)
/dev/mapper/isw_cabciecjfi_Raid:  x86 boot sector
/dev/mapper/isw_cabciecjfi_Raid1: Linux rev 1.0 ext4 filesystem data, UUID=a8d48d53-fd68-40d8-8dd5-3cecabad6e7a (needs journal recovery) (extents) (large files) (huge files)
/dev/mapper/isw_cabciecjfi_Raid3: Linux rev 1.0 ext4 filesystem data, UUID=3cb24366-b9c8-4e68-ad7b-22449668f047 (extents) (large files) (huge files)
/dev/mapper/isw_cabciecjfi_Raid5: Linux/i386 swap file (new style), version 1 (4K pages), size 1496044 pages, no label, UUID=f07e031f-368a-443e-a21c-77fa27adf795
/dev/mapper/isw_cabciecjfi_Raid6: Linux rev 1.0 ext3 filesystem data, UUID=0f0b401a-f238-4b20-9b2a-79cba56dd9d0 (large files)
/dev/mapper/isw_cabciecjfi_Raid7: Linux rev 1.0 ext3 filesystem data, UUID=b2d66029-eeb9-4e4a-952c-0a3bd0696159 (large files)
Also, I have one additional disk, /dev/mapper/isw_cabciecjfi_Raid, in my system. I tried to mount a partition but got:
# mount /dev/mapper/isw_cabciecjfi_Raid6 /mnt/media
mount: unknown filesystem type 'linux_raid_member'
I rebooted and confirmed that RAID is turned off in my BIOS. I tried to force a mount, which seems to succeed, but the content of the partition is inaccessible, so it still doesn't work as expected:
# mount -ft ext3 /dev/mapper/isw_cabciecjfi_Raid6 /mnt/media
# ls -l /mnt/media/
total 0
# mount -ft ext3 /dev/mapper/isw_cabciecjfi_Raid /mnt/data
# ls -l /mnt/data
total 0
EDIT 4
After executing the suggested commands, I only get:
$ sudo mdadm --examine /dev/sd[bc]6 /dev/sd[bc]7
mdadm: cannot open /dev/sd[bc]6: No such file or directory
mdadm: cannot open /dev/sd[bc]7: No such file or directory
EDIT 5
I got /dev/md127 mounted now, but /dev/md0 and /dev/md1 are still not accessible:
# mdadm --examine /dev/sd[bc]6 /dev/sd[bc]7
mdadm: cannot open /dev/sd[bc]6: No such file or directory
mdadm: cannot open /dev/sd[bc]7: No such file or directory
root@regDesktopHome:~# mdadm --stop /dev/md12[567]
mdadm: stopped /dev/md127
root@regDesktopHome:~# mdadm --assemble --scan
mdadm: /dev/md127 has been started with 1 drive (out of 2).
root@regDesktopHome:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : active raid1 dm-3[0]
      390628416 blocks [2/1] [U_]
md1 : inactive dm-6[0](S)
      238340224 blocks
md0 : inactive dm-5[0](S)
      244139648 blocks
unused devices: <none>
root@regDesktopHome:~# ls -l /dev/mapper
total 0
crw------- 1 root root 10, 236 Aug 13 22:43 control
brw-rw---- 1 root disk 252, 0 Aug 13 22:43 isw_cabciecjfi_Raid
brw------- 1 root root 252, 1 Aug 13 22:43 isw_cabciecjfi_Raid1
brw------- 1 root root 252, 2 Aug 13 22:43 isw_cabciecjfi_Raid2
brw------- 1 root root 252, 3 Aug 13 22:43 isw_cabciecjfi_Raid3
brw------- 1 root root 252, 4 Aug 13 22:43 isw_cabciecjfi_Raid5
brw------- 1 root root 252, 5 Aug 13 22:43 isw_cabciecjfi_Raid6
brw------- 1 root root 252, 6 Aug 13 22:43 isw_cabciecjfi_Raid7
root@regDesktopHome:~# mdadm --examine
mdadm: No devices to examine
root@regDesktopHome:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : active raid1 dm-3[0]
      390628416 blocks [2/1] [U_]
md1 : inactive dm-6[0](S)
      238340224 blocks
md0 : inactive dm-5[0](S)
      244139648 blocks
unused devices: <none>
root@regDesktopHome:~# mdadm --examine /dev/dm-[356]
/dev/dm-3:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 124cd4a5:2965955f:cd707cc0:bc3f8165
  Creation Time : Tue Sep  1 18:50:36 2009
     Raid Level : raid1
  Used Dev Size : 390628416 (372.53 GiB 400.00 GB)
     Array Size : 390628416 (372.53 GiB 400.00 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 127
    Update Time : Sat May 31 18:52:12 2014
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 23fe942e - correct
         Events : 167
      Number   Major   Minor   RaidDevice State
this     0       8       35        0      active sync
   0     0       8       35        0      active sync
   1     1       8       19        1      active sync
/dev/dm-5:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 91e560f1:4e51d8eb:cd707cc0:bc3f8165
  Creation Time : Tue Sep  1 19:15:33 2009
     Raid Level : raid1
  Used Dev Size : 244139648 (232.83 GiB 250.00 GB)
     Array Size : 244139648 (232.83 GiB 250.00 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Update Time : Fri May  9 21:48:44 2014
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : bfad9d61 - correct
         Events : 75007
      Number   Major   Minor   RaidDevice State
this     0       8       38        0      active sync
   0     0       8       38        0      active sync
   1     1       8       22        1      active sync
/dev/dm-6:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 0abe503f:401d8d09:cd707cc0:bc3f8165
  Creation Time : Tue Sep  8 21:19:15 2009
     Raid Level : raid1
  Used Dev Size : 238340224 (227.30 GiB 244.06 GB)
     Array Size : 238340224 (227.30 GiB 244.06 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 1
    Update Time : Fri May  9 21:48:44 2014
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 2a7a125f - correct
         Events : 3973383
      Number   Major   Minor   RaidDevice State
this     0       8       39        0      active sync
   0     0       8       39        0      active sync
   1     1       8       23        1      active sync
EDIT 6
I stopped them with mdadm --stop /dev/md[01] and confirmed that /proc/mdstat no longer shows them, then executed mdadm --assemble --scan and got:
# mdadm --assemble --scan
mdadm: /dev/md0 has been started with 1 drives.
mdadm: /dev/md1 has been started with 2 drives.
But if I try to mount either of the arrays, I still get:
root@regDesktopHome:~# mount /dev/md1 /mnt/data
mount: wrong fs type, bad option, bad superblock on /dev/md1,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail  or so
In the meantime, I've figured out that my superblocks seem to be damaged (PS: I have confirmed with tune2fs and fdisk that I'm dealing with an ext3 partition):
root@regDesktopHome:~# e2fsck /dev/md1
e2fsck 1.42.9 (4-Feb-2014)
The filesystem size (according to the superblock) is 59585077 blocks
The physical size of the device is 59585056 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort<y>? yes
root@regDesktopHome:~# e2fsck /dev/md0
e2fsck 1.42.9 (4-Feb-2014)
The filesystem size (according to the superblock) is 61034935 blocks
The physical size of the device is 61034912 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort<y>? yes
But both partitions have some superblocks backed up:
root@regDesktopHome:~# mke2fs -n /dev/md0
mke2fs 1.42.9 (4-Feb-2014)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
15261696 inodes, 61034912 blocks
3051745 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
1863 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872
root@regDesktopHome:~# mke2fs -n /dev/md1
mke2fs 1.42.9 (4-Feb-2014)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
14901248 inodes, 59585056 blocks
2979252 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
1819 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872
What do you think, should I try to restore the backup superblock on both arrays from block 23887872? I think I could do that with e2fsck -b 23887872 /dev/md[01] - do you recommend giving this a shot? I don't necessarily want to experiment with something I don't exactly understand and that might destroy the data on my disks... man e2fsck doesn't necessarily say it's dangerous, but there might be another, safer way to fix the superblock...?
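(A non-destructive way to test a backup superblock before committing to anything: e2fsck's -n flag opens the filesystem read-only and answers "no" to all prompts. The block number and the 4096-byte block size below are taken from the mke2fs -n output above:)
e2fsck -n -b 32768 -B 4096 /dev/md0    # read-only check against the first backup superblock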
AS A LAST UPDATE TO THE COMMUNITY,
I used resize2fs to get my superblocks back in order and my drives mounted again! (resize2fs /dev/md0 and resize2fs /dev/md1 got me back up!) Long story, but it finally worked out! And I learned a lot about mdadm along the way! Thank you @IanMacintosh
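(For anyone hitting the same size mismatch, a minimal sketch of that recovery sequence: resize2fs with no explicit size resizes the filesystem to the current device size, and it normally insists on a clean e2fsck -f run first. This assumes the only problem is the superblock-vs-device size discrepancy shown above:)
e2fsck -f /dev/md0      # resize2fs requires a recent clean check
resize2fs /dev/md0      # no size given: resize the fs to the device size
e2fsck -f /dev/md1
resize2fs /dev/md1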
- frostschutz almost 10 years
Your mdadm.conf is odd. Normally it has UUID=some:thing:ran:dom instead of devices=. What's the output of mdadm --detail --scan?
- frostschutz almost 10 years
Also, those md12[567] you have are on device-mapper devices. Try dmsetup ls or dmsetup table to see what they are - maybe something LVM related?
- stdcerr almost 10 years
@frostschutz I don't understand what you mean by "Your mdadm.conf is odd." - can you clarify please? See EDIT 1 for the output of mdadm --detail --scan. Thanks!
- stdcerr almost 10 years
@frostschutz See EDIT 2 for the results of dmsetup. To be honest, I can't remember whether they are on LVM or not; I set them up about 5 years ago and they always used to run fine - until I updated to Kubuntu 14.04 and my RAID broke... that's where I'm at now...
- frostschutz almost 10 years
Looks like fakeraid with /dev/sdb and /dev/sdc. Try file -s -L /dev/mapper/* to see if there is anything of use.
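(If dmraid is installed, it can also be queried directly; the isw_ prefix on those device-mapper names is the Intel firmware-RAID ("fakeraid") format it handles. A sketch - output varies by system:)
dmraid -r    # list block devices carrying firmware-RAID metadata
dmraid -s    # show the discovered RAID sets and their state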
- stdcerr almost 10 years
@frostschutz The RAID setup in the BIOS wasn't recognized by Linux initially; that's why I set up my RAID with mdadm - do you think that might be the problem here? How did you find out about the fakeraid? Please see the /dev/mapper output above in EDIT 3. Thanks!
- Ian Macintosh almost 10 years
Interesting dual problem. Glad it's sorted! Re the RAID setup though, I would look to ensure there is only one RAID array per physical device if possible. I prefer to use LVM to subsequently 'partition' the /dev/mdX device. Also, don't forget to save your current working config to /etc/mdadm/mdadm.conf using mkconf to ensure it stays working :-)
- stdcerr almost 10 years
@IanMacintosh Is there a particular reason why you recommend having only one partition per physical RAID device, or is this your personal preference?
- Ian Macintosh almost 10 years
Use RAID (mdadm) to make your array of disks redundant first, then use LVM to partition that space into volumes, and then lay your filesystems onto the LVM volumes. I.e., RAID is to make your disks redundant, not to make volumes. LVM is to make volumes (it can do RAID, but not as well as mdadm). I.e., my recommendation is to use each tool to its strength. YMMV :-)
- stdcerr almost 10 years
@IanMacintosh Fair enough! Thanks for your suggestion! I'll need to find a tool to merge my partitions first without losing my data. I think parted might be helpful for that, won't it?
- Ian Macintosh almost 10 years
Depends on whether you're using LVM already or not. If you're using LVM, you can move volumes from disk to disk trivially as long as you have the space available, and best of all, you don't even take them offline to do so. But we're getting too long on comments here! I suggest a new question if you're looking for how to shuffle with LVM (hint: see pvmove). If they're not on LVM, then rsync is a good way to do it online (again, you need disk space). BUT you don't have to shuffle your data right now. You can file the recommendation away for the next time you set up your PC and do it then; it's not critical.
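(A minimal sketch of the pvmove approach hinted at above; the volume group and device names are illustrative:)
vgextend vg0 /dev/md1    # add the new array to the volume group
pvmove /dev/md0          # migrate all extents off the old PV, online
vgreduce vg0 /dev/md0    # drop the old PV once it is empty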
- Marcin Bobowski over 8 years
You have saved my life with this article. Finally, after restoring the superblock from a backup, my RAID1 WD MyBook World disk came back to life and allowed mounting from the /dev/mdX device.
- stdcerr almost 10 years
Hi @IanMacintosh, I added the results to EDIT 4 above. Thank you for looking at them!
- Ian Macintosh almost 10 years
What happened with the stop command followed by mdadm --assemble --scan? If your devices were at all discoverable, then they should have come through that fully assembled and started. Can you add the output of cat /proc/mdstat after you have done the first two commands?
- Ian Macintosh almost 10 years
Can you also paste the output of ls -l /dev/mapper to that? You've already got the dmsetup ls output, but the ls will help clarify and link the dm-3, 5 & 6 from your second paste. After that I can give you the proper device names for a complete mdadm --examine. If your layout is still the same as at the beginning of your post (cat /proc/mdstat), then also paste the output of mdadm --examine /dev/dm-[356].
- stdcerr almost 10 years
Hi Ian, I've provided the output of all the above commands under EDIT 5. Thanks!
- stdcerr almost 10 years
Yay, I got /dev/md127 mounted now, but /dev/md0 and /dev/md1 are still not accessible, as described in EDIT 5.
- Ian Macintosh almost 10 years
Hmm, your RAID arrays changed since your first post. You must stop all the arrays except for /dev/md127. Type mdadm --stop /dev/mdXXXXX for all the other arrays until cat /proc/mdstat only shows /dev/md127. Then type mdadm --assemble --scan. That should start all your arrays correctly, except for /dev/md127, which is now out of sync and needs rebuilding, probably manually. Will tackle that later. I'll look at the RAID details (--examine data) in the morning.
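(For reference, rebuilding a degraded mirror usually means re-adding the missing member once it has been identified; the device name below is hypothetical:)
mdadm /dev/md127 --add /dev/dm-1    # re-add the missing mirror half
cat /proc/mdstat                    # watch the resync progress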
- stdcerr almost 10 years
EDIT 6 above provides some interesting further analysis. I'm unsure whether you can help me or whether I should open another thread, as it doesn't necessarily look like my original problem, but it's still mdadm related... Thanks for your insight!
- maxweber almost 8 years
I had a RAID but it would no longer load, just like the OP. Thanks to Ian Macintosh, the RAID is now mounting fine AND with the original name. While trying, I upgraded from Ubuntu 14.04 to 16.04, but the fix was the above. No idea how mdadm.conf got destroyed.