md raid not mounted by dracut
Solution 1
The dracut documentation implies that any md RAID arrays should be assembled automatically, and that the rd.md.uuid parameter is only needed if you want just certain arrays assembled as part of the boot process.

In reality, it seems the arrays are not assembled automatically, and are in fact only assembled when an rd.md.uuid parameter is set for each array that needs to be assembled. It could be that the rd.lvm.lv parameter, which was already set, somehow interfered with md assembly, but I don't have the time to test that.

In short, adding rd.md.uuid parameters for both of my arrays to the GRUB_CMDLINE_LINUX variable in /etc/default/grub, and then regenerating the grub config, fixed the issue for me.
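A minimal sketch of that edit, run here against a sample copy of the file so it is safe to try. The existing GRUB_CMDLINE_LINUX contents are an assumption; the two UUIDs are the ones from the mdadm.conf in the question below, and on a real system you would take them from `mdadm --detail --scan` and edit /etc/default/grub itself.

```shell
# Sample stand-in for /etc/default/grub (contents assumed).
cat > grub.sample <<'EOF'
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=centos/root rhgb quiet"
EOF

# Append one rd.md.uuid=<uuid> per array inside the quoted value.
sed -i '/^GRUB_CMDLINE_LINUX=/ s/"$/ rd.md.uuid=5b5057b4:4235ba4b:5342dfda:acf63302 rd.md.uuid=f82a8c99:9b391d83:4efc9456:9e9bad98"/' grub.sample

cat grub.sample
```

Afterwards the grub config has to be regenerated; on an EFI CentOS 7 install that is typically `grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg` (the exact output path depends on your setup).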
Solution 2
Adding the rd.md=1, rd.md.conf=1, and rd.auto=1 parameters to the GRUB_CMDLINE_LINUX variable in /etc/default/grub, and then regenerating the grub config, fixed a similar issue of mine. These parameters default to zero (the dracut.cmdline documentation does not state this explicitly, but they do).

Of course, adding rd.md.uuid alone also works, because that explicitly starts the required array. But I am lazy and prefer the general parameters. The rd.md.uuid version has the advantage that only the required arrays are started at boot time.
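The same kind of edit, sketched on a sample copy of the file (the existing cmdline contents are again an assumption):

```shell
# Sample stand-in for /etc/default/grub (contents assumed).
cat > grub.sample2 <<'EOF'
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=centos/root rhgb quiet"
EOF

# rd.auto=1 lets dracut auto-assemble md/LVM/dm devices; rd.md=1 and
# rd.md.conf=1 enable md assembly and the use of /etc/mdadm.conf.
sed -i '/^GRUB_CMDLINE_LINUX=/ s/"$/ rd.md=1 rd.md.conf=1 rd.auto=1"/' grub.sample2

cat grub.sample2
```

As with Solution 1, regenerate the grub config afterwards (e.g. with grub2-mkconfig) so the new parameters actually reach the kernel command line.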
dghodgson
Updated on September 18, 2022

Comments
-
dghodgson over 1 year
Background
I'm running CentOS 7. Originally, it was running on a single disk that looked something like this:

1 200M    EFI System (/boot/efi)
2 500M    Microsoft basic (/boot)
3 465.1G  Linux LVM
    LVM VG centos
    - LVM LV ext4 centos-root (/)
    - LVM LV swap centos-swap (swap)
This was just a temporary solution as it was originally supposed to be installed on a Linux software RAID1 array. I got around to migrating it today. This is what it currently looks like:
Both new disks have this partition layout:

1 200M    EFI System (/boot/efi)
2 457.6G  Linux RAID  /dev/md0 RAID1 (for boot and LVM)
3 8G      Linux RAID  /dev/md1 RAID0 (so 16GB total, for swap)

/dev/md0 looks like this:

1 500M  Linux filesystem (/boot)
2 457G  Linux LVM (centos-root is migrated to this)

LVM now has only one LV, centos-root.
/etc/mdadm.conf looks like this:

ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 name=main.centos.local:0 UUID=5b5057b4:4235ba4b:5342dfda:acf63302 devices=/dev/sda2,/dev/sdb2
ARRAY /dev/md1 level=raid0 num-devices=2 metadata=1.2 name=main.centos.local:1 UUID=f82a8c99:9b391d83:4efc9456:9e9bad98 devices=/dev/sda3,/dev/sdb3
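The UUIDs in these ARRAY lines are exactly what Solution 1's rd.md.uuid parameters need. A sketch of extracting them, shown on a sample copy of the file (the sample filename is an assumption; on a real system point sed at /etc/mdadm.conf or use `mdadm --detail --scan`):

```shell
# Recreate the mdadm.conf above as a sample file.
cat > mdadm.conf.sample <<'EOF'
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 name=main.centos.local:0 UUID=5b5057b4:4235ba4b:5342dfda:acf63302 devices=/dev/sda2,/dev/sdb2
ARRAY /dev/md1 level=raid0 num-devices=2 metadata=1.2 name=main.centos.local:1 UUID=f82a8c99:9b391d83:4efc9456:9e9bad98 devices=/dev/sda3,/dev/sdb3
EOF

# Print one ready-to-paste kernel parameter per ARRAY line:
#   rd.md.uuid=5b5057b4:4235ba4b:5342dfda:acf63302
#   rd.md.uuid=f82a8c99:9b391d83:4efc9456:9e9bad98
sed -n 's/.*UUID=\([0-9a-f:]*\).*/rd.md.uuid=\1/p' mdadm.conf.sample
```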
/etc/fstab looks like this:

/dev/mapper/centos-root                    /          xfs   defaults                    0 0
UUID=fcb5f82f-ce6b-460b-800f-329e010bc403  /boot      xfs   defaults                    0 0
UUID=C532-14AE                             /boot/efi  vfat  umask=0077,shortname=winnt  0 0
/dev/md1                                   swap       swap  defaults                    0 0
blkid outputs this (relevant entries only):

/dev/sdb1: SEC_TYPE="msdos" UUID="C532-14AE" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="ed301bbd-c15c-40af-ae75-bf238d0e6270"
/dev/sda1: SEC_TYPE="msdos" UUID="C532-14AE" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="f3a76412-41a0-4e04-9b04-ad1c159133cf"
/dev/md0p1: LABEL="boot" UUID="fcb5f82f-ce6b-460b-800f-329e010bc403" TYPE="xfs" PARTLABEL="primary" PARTUUID="df8d6481-c6ce-423a-b5d5-205d355e5653"
/dev/md0p2: UUID="7LfywM-oPHy-MTEt-swlI-EVbZ-opTo-m82E6R" TYPE="LVM2_member" PARTLABEL="primary" PARTUUID="19e7f9d5-a955-4036-8338-03a748faa1f6"
/dev/mapper/centos-root: UUID="deaa9788-b487-4991-adf7-2945788fb6cd" TYPE="xfs"
I have a script which automatically mounts the other EFI partition to /boot/efi_[device], and when the kernel is updated, the grub.cfg gets copied to this partition to keep everything in sync. /dev/sda1 and /dev/sdb1 are kept in sync by the script (I've verified this), so it shouldn't be an issue that fstab mounts either one to /boot/efi (this also means that if one drive is removed due to failure, the system is still guaranteed to boot). I could have put swap in an LV to simplify things, but the RAID0 gets better performance (for what it's worth) and I get an extra 16GB of space.

I migrated the LV from the old drive to the new PV using the following commands:
pvcreate /dev/md0p2
vgextend centos /dev/md0p2
pvmove /dev/sdg3
vgreduce centos /dev/sdg3
Then I regenerated the initramfs with dracut (after backing up the original), and finally regenerated grub.cfg. Afterwards, I mounted the new /boot and /boot/efi partitions and copied everything over.

Problem
After disconnecting the old drive and booting, dracut fails to find my RAID arrays, and of course the /boot partition and my LVG as well. It appears that it's simply not calling mdadm --assemble on /dev/md0 and /dev/md1. I'm able to do just that from the dracut prompt, after which lvm_scan finds my LVG, I can link /dev/centos/root to /dev/root, and the system continues booting without any problems once I exit the prompt. Everything seems to be exactly where it should be.

There was a kernel update available, so I tried installing it (assuming I had messed something up the first time around when regenerating the initramfs and grub.cfg files), but no dice. The system still fails in exactly the same way. This is true when I boot from either EFI partition manually (as it should be, since the two are identical).
Link to rdsosreport.txt on pastebin
What am I missing here? How do I get dracut to assemble my arrays?
-
Thomas about 8 years: Did you run dracut with the -a mdraid option to add the needed files? It might be that, because the system was installed without mdraid, CentOS 7 did not include this module by default.
-
dghodgson about 8 years: Didn't work. In any case, if that were the problem, I shouldn't have been able to assemble the arrays from the dracut prompt.
-
Veelkoov about 8 years: I know that my (this) comment is valueless, but THANK YOU!