My RAID 1 always renames itself to /dev/md127 after rebooting | DEBIAN 10
Solution 1
SOLUTION
I couldn't find a fix for an already created RAID 1 configuration, so back up your data: for this solution you'll need to delete your RAID 1 first. In my case, I just deleted the virtual machine I was working with and created a new one.
This is going to work on Debian 10, starting from a clean machine.
Create a new clean raid1 configuration
In my case I have 3 virtual disks, so I run the command like this (remember that you first need to create partitions of the same size, with type "Linux raid autodetect"):
sudo mdadm --create /dev/md1 --level=mirror --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
Edit mdadm.conf
Open /etc/mdadm/mdadm.conf, delete all of its content, and replace it with this instead:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
Add a reference to your array inside the previous file
Log in as root and run this (the >> redirection itself needs root privileges, so sudo alone is not enough):
sudo mdadm --detail --scan >> /etc/mdadm/mdadm.conf
Now the contents of this file are
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
ARRAY /dev/md1 metadata=1.2 name=buster:1 UUID=1279dbd2:d0acbb4f:0b34e3e1:3de1b3af
(this last ARRAY line is the one the scan just added, referencing the array)
If the command has added something before the ARRAY line, delete it.
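To avoid stray output landing in the file in the first place, you can filter the scan so that only ARRAY lines get appended. The sketch below runs against a hypothetical copy of the scan output; on the real machine you would pipe `mdadm --detail --scan` itself.

```shell
# Hypothetical mdadm --detail --scan output containing a stray warning line;
# only ARRAY lines belong in mdadm.conf.
scan_output='mdadm: example stray warning
ARRAY /dev/md1 metadata=1.2 name=buster:1 UUID=1279dbd2:d0acbb4f:0b34e3e1:3de1b3af'
# Keep only the ARRAY lines; on the real machine:
#   mdadm --detail --scan | grep '^ARRAY' >> /etc/mdadm/mdadm.conf
printf '%s\n' "$scan_output" | grep '^ARRAY'
```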
Just in case, run sudo update-initramfs -u so the updated mdadm.conf is copied into the initramfs used at boot.
Permanently mount a partition of your raid
Mounting it is optional, but I think you'll want to use the storage of your RAID 1.
- Get the UUID of your partition with
sudo blkid
- Edit /etc/fstab and add this new line:
UUID=d367f4ed-2b37-4967-971a-13d9129fff4f /home/vagrant/raid1 ext3 defaults 0 2
Replace the UUID with the one you got for your partition, and the filesystem type with the one your partition actually has.
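The two steps above can be combined in a small sketch: pull the UUID out of a blkid line and build the fstab entry from it. The blkid line here is a hypothetical copy of real output; the mount point and filesystem type are the ones used in this guide.

```shell
# Hypothetical blkid output line for the array's partition
blkid_line='/dev/md1p1: UUID="d367f4ed-2b37-4967-971a-13d9129fff4f" TYPE="ext3"'
# Extract the filesystem UUID (the value after UUID=)
uuid=$(printf '%s\n' "$blkid_line" | sed -n 's/.* UUID="\([^"]*\)".*/\1/p')
# Build the fstab line with this guide's mount point and filesystem type
fstab_line="UUID=$uuid /home/vagrant/raid1 ext3 defaults 0 2"
printf '%s\n' "$fstab_line"
```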
The contents of my /etc/fstab are now:
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/vda1 during installation
UUID=b9ffc3d1-86b2-4a2c-a8be-f2b2f4aa4cb5 / ext4 errors=remount-ro 0 1
# swap was on /dev/vda5 during installation
UUID=f8f6d279-1b63-4310-a668-cb468c9091d8 none swap sw 0 0
/dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0
UUID=d367f4ed-2b37-4967-971a-13d9129fff4f /home/vagrant/raid1 ext3 defaults 0 2
(here you can clearly see the line I added, at the end)
NOW YOU CAN REBOOT
The name is no longer going to change.
If I run sudo fdisk -l
I get this (showing just the relevant information):
Disk /dev/md1: 1022 MiB, 1071644672 bytes, 2093056 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x37b2765e
Device Boot Start End Sectors Size Id Type
/dev/md1p1 2048 2093055 2091008 1021M 83 Linux
If I run df -Th
I get
Filesystem Type Size Used Avail Use% Mounted on
udev devtmpfs 227M 0 227M 0% /dev
tmpfs tmpfs 49M 3.4M 46M 7% /run
/dev/sda1 ext4 19G 4.1G 14G 24% /
tmpfs tmpfs 242M 0 242M 0% /dev/shm
tmpfs tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs tmpfs 242M 0 242M 0% /sys/fs/cgroup
/dev/md1p1 ext3 989M 1.3M 937M 1% /home/vagrant/raid1
tmpfs tmpfs 49M 0 49M 0% /run/user/1000
You can see that it is also mounted. And finally, if I run cat /proc/mdstat
, I get
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sdd1[2] sdc1[1] sdb1[0]
1046528 blocks super 1.2 [3/3] [UUU]
unused devices: <none>
The raid1 is working, with sdb1, sdc1 and sdd1.
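That last check can also be scripted. The sketch below runs against a copy of the /proc/mdstat text shown above (on the real machine you would read /proc/mdstat directly) and prints each array's name, state, and level:

```shell
# Copy of the /proc/mdstat contents shown above; in practice, use:
#   awk '/^md/ {print $1, $3, $4}' /proc/mdstat
mdstat='Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sdd1[2] sdc1[1] sdb1[0]
      1046528 blocks super 1.2 [3/3] [UUU]
unused devices: <none>'
printf '%s\n' "$mdstat" | awk '/^md/ {print $1, $3, $4}'
```

If the name fix worked, the first field printed after a reboot is md1, not md127.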
Now this is COMPLETE! You can reboot and your RAID name will always stay the same.
All the sources I used to find the solution that worked for me:
https://superuser.com/questions/287462/how-can-i-make-mdadm-auto-assemble-raid-after-each-boot
https://ubuntuforums.org/showthread.php?t=2265120
https://askubuntu.com/questions/63980/how-do-i-rename-an-mdadm-raid-array
https://serverfault.com/questions/267480/how-do-i-rename-an-mdadm-raid-array
https://bugzilla.redhat.com/show_bug.cgi?id=606481
Some are more relevant to this solution than others, but ALL OF THEM helped me reach it.
Wow, you have read a lot, haven't you? Now you can relax if your problem is solved. Hope this helped you out! See you!
Solution 2
I removed the ARRAY line in "/etc/mdadm/mdadm.conf" and ran the next command
sudo update-initramfs -u
then rebooted the system. Then I added the ARRAY lines back with
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
opened /etc/mdadm/mdadm.conf and changed the lines to
ARRAY /dev/md/0 metadata=1.2 name=raspberrypi-nas:0 UUID=86275e90:a19b3601:fc78b0d8:57f9c56a
ARRAY /dev/md/1 metadata=1.2 name=raspberrypi:1 UUID=e8f0c48c:448321f6:1db0f830:ea39bc42
the way I wanted them, then ran
sudo update-initramfs -u
and rebooted the system. Now everything works as expected.
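The manual edit of the ARRAY lines can also be done with sed. This is a sketch against a hypothetical line where the array came up as /dev/md127; the target device path and names mirror the example above.

```shell
# Hypothetical ARRAY line as written by mdadm --detail --scan
line='ARRAY /dev/md127 metadata=1.2 name=raspberrypi:1 UUID=e8f0c48c:448321f6:1db0f830:ea39bc42'
# Rewrite the device path to the name we actually want the array to keep
printf '%s\n' "$line" | sed 's|^ARRAY /dev/md127 |ARRAY /dev/md/1 |'
```

Remember to run update-initramfs -u afterwards, as described above, so the change survives the reboot.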
Adrián Jaramillo
Updated on September 18, 2022
Comments
-
Adrián Jaramillo over 1 year
PROBLEM
I create a RAID 1 configuration and name it /dev/md1, but when I reboot, the name always changes to /dev/md127.