CentOS 7 created mdadm array disappears after reboot
Solution 1
I have found a workaround that works for me; maybe it will help you too. Here it is:
The RAID mount disappears because the system is not reading /etc/mdadm.conf during boot. To work around this, I edited /etc/rc.d/rc.local
to include the commands below:
sleep 10
mdadm --assemble --scan
sleep 10
mount -a
Now, every time I reboot the system, it reads this file, runs the commands in it, and mounts the RAID.
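One detail worth noting: on CentOS 7, the rc-local systemd unit only executes /etc/rc.d/rc.local if the file has its executable bit set. Below is a minimal sketch of the whole change; for safety it stages everything on a temporary copy, so point RC_LOCAL at /etc/rc.d/rc.local on a real system.

```shell
# Sketch: append the assemble-and-mount workaround to rc.local.
# RC_LOCAL points at a temp file here for safe testing; on a real
# CentOS 7 box use RC_LOCAL=/etc/rc.d/rc.local instead.
RC_LOCAL=$(mktemp)
cat >> "$RC_LOCAL" <<'EOF'
sleep 10
mdadm --assemble --scan
sleep 10
mount -a
EOF
# rc-local.service only runs the file at boot if it is executable:
chmod +x "$RC_LOCAL"
```

The sleeps simply give the kernel time to settle device discovery before assembling; they are a pragmatic delay, not a requirement.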
Solution 2
I came across this question, which helped me a lot during troubleshooting, but none of the answers solved my problem.
So maybe this helps someone with the same issue. In my case I have two NVMe drives on Red Hat Enterprise Linux 7.2.
SELinux has a problem with the NVMe devices and prevents mdadm from working with them.
Symptoms:
Create the software RAID as described in the question of this thread. After a reboot the RAID is gone:
- /proc/mdstat is empty and /dev/md0 does not exist
- mdadm --assemble --scan brings the RAID back up
Solution:
I disabled SELinux in my case. I know that is not possible on every system, but maybe this points you in the right direction.
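If disabling SELinux outright is too drastic, switching it to permissive mode is a gentler test: persistently it comes down to one line in /etc/selinux/config, and `setenforce 0` (as root) applies it to the running system without a reboot. A sketch, staged on a temporary copy of the config so nothing on the live system is touched:

```shell
# Sketch: flip SELINUX=enforcing to permissive in the SELinux config.
# CFG stands in for /etc/selinux/config on a real system; the printf
# just recreates the relevant lines of that file for demonstration.
CFG=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$CFG"
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' "$CFG"
```

In permissive mode SELinux logs denials (check /var/log/audit/audit.log) instead of blocking them, which also tells you whether SELinux was really the culprit.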
Solution 3
In theory, one can build RAID arrays out of "bare" (non-partitioned) drives, but I noticed your disks are showing up as GPT-partitioned, not as md members. In general, I've had better success and stability by partitioning my disks and then using the partitions in my md arrays.
I'd try creating a partition table, setting the partition type to Linux RAID autodetect (fd in fdisk, if I recall correctly), and then recreating your array.
Also, I found that I had better success if I did NOT use an mdadm.conf. Modern versions of the md tools get all the information they need from the superblocks of the partitions involved.
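Putting this advice together, the repartition-and-recreate steps might look like the sketch below. The device names /dev/sda and /dev/sdc are taken from the question; the commands are wrapped in a function so nothing runs until you deliberately call it as root, because they destroy all data on both disks.

```shell
# Sketch only: give each disk a single "Linux RAID" (fd00) partition,
# then build the mirror from the partitions rather than the raw disks.
# WARNING: calling this destroys all data on /dev/sda and /dev/sdc.
recreate_raid1() {
    sgdisk --zap-all /dev/sda            # wipe old GPT/MBR structures
    sgdisk --zap-all /dev/sdc
    sgdisk -n 1:0:0 -t 1:fd00 /dev/sda   # one whole-disk Linux RAID partition
    sgdisk -n 1:0:0 -t 1:fd00 /dev/sdc
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdc1
}
```

With 1.2-format superblocks on the partitions, `mdadm --assemble --scan` can then find the array at boot without any mdadm.conf entry.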
uwsublime · Updated on September 18, 2022

Comments
-
uwsublime almost 2 years
I created a raid1 using the below disks and command:
$ ls -l /dev/disk/by-id/ata-ST3000*
lrwxrwxrwx 1 root root 9 Sep 19 07:27 /dev/disk/by-id/ata-ST3000DM001-9YN166_Z1F04NR1 -> ../../sdc
lrwxrwxrwx 1 root root 9 Sep 19 07:27 /dev/disk/by-id/ata-ST3000DM001-9YN166_Z1F190E3 -> ../../sda
$ mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/disk/by-id/ata-ST3000DM001-9YN166_Z1F190E3 /dev/disk/by-id/ata-ST3000DM001-9YN166_Z1F04NR1
I added the pertinent information to mdadm.conf, using 'mdadm --detail --scan >> /etc/mdadm.conf' for the ARRAY line:
$ cat /etc/mdadm.conf
DEVICE /dev/disk/by-id/ata-ST3000DM001-9YN166_Z1F190E3
DEVICE /dev/disk/by-id/ata-ST3000DM001-9YN166_Z1F04NR1
ARRAY /dev/md0 metadata=1.2 name=jaime.WORKGROUP:0 UUID=93f2cb73:2d124630:562f1dd9:bf189029
MAILADDR your@address
I created and mounted the filesystem:
$ mkfs -t xfs /dev/md0
$ mount -t xfs /dev/md0 /data
After rebooting, /dev/md0 no longer exists and I can't assemble the array:
$ mdadm --assemble /dev/md0 /dev/disk/by-id/ata-ST3000DM001-9YN166_Z1F190E3 /dev/disk/by-id/ata-ST3000DM001-9YN166_Z1F04NR1
mdadm: Cannot assemble mbr metadata on /dev/disk/by-id/ata-ST3000DM001-9YN166_Z1F190E3
mdadm: /dev/disk/by-id/ata-ST3000DM001-9YN166_Z1F190E3 has no superblock - assembly aborted
$ blkid
/dev/sda: PTTYPE="gpt"
/dev/sdb1: UUID="5c7d3f2b-c975-46a3-a116-e9fc156c1de5" TYPE="xfs"
/dev/sdb2: UUID="JhoqjI-N6R6-O9zt-Xumq-TnFX-OUCd-Lg9YHy" TYPE="LVM2_member"
/dev/sdc: PTTYPE="gpt"
/dev/mapper/centos-swap: UUID="3b882d4d-b900-4c59-9912-60a413699db4" TYPE="swap"
/dev/mapper/centos-root: UUID="08df953d-d4f4-4e83-bf4b-41f14a98a12e" TYPE="xfs"
/dev/mapper/centos-home: UUID="2358f723-5e7f-49ed-b207-f32fe34b1bbc" TYPE="xfs"
-
slm almost 10 years Confirm that the drives are there and identified by the kernel to start: blkid or lsblk. Also look in the output of dmesg for any messages related to the RAID.
-
slm almost 10 years Also, I've never seen /dev/disk/by-id/... used when constructing RAIDs. You typically use /dev/sda1 and /dev/sdb1, where these are partitions on the device created using parted, fdisk, or gdisk.
-
uwsublime almost 10 years Using /dev/sdXX will cause problems when you add drives to the system and those identifiers change. I have read a lot on this over the past week and the recommendation seems to be to use either the by-id or the by-uuid identifier. Also, I've seen arrays created both on partitions and on raw devices. I can't find any hard-and-fast best practices on these points...
-
uwsublime almost 10 years /dev/sda and /dev/sdc are the RAID drives.
-
slm almost 10 years Things to try: serverfault.com/questions/43897/…
-
slm almost 10 years Also double-check that you have the right UUIDs: mdadm --examine /dev/sda and mdadm --examine /dev/sdc.
-
uwsublime almost 10 years gdisk shows this option: fd00 Linux RAID
-
uwsublime almost 10 years I've gone forward with partitioning first and so far the results are good. I will update again once all of my arrays are in place... may be a few days. Thanks for the help, all!
-
uwsublime almost 10 years I've got all of my mirrored pools set up now on "Linux RAID" (fd00) partitions. I did not use an mdadm.conf file. Everything seems to be working great, even when moving drives to different SATA ports.