New mdadm RAID vanishes after reboot


Solution 1

The reason is two-fold:

  • Your (new) mdadm.conf is not being read by the time the arrays are assembled.

    This is because assembly happens before your root file system is mounted (obviously: you have to have a working RAID device to access it), so this file is read from the initramfs image containing the so-called pre-boot environment.

    So to make this work, after updating the config, run

    # update-initramfs -u
    

    to get the initramfs updated (a fuller sketch follows after this list).

  • Your RAID device is not being discovered and assembled automatically at boot.

    To provide for that, change the type of the member partitions to 0xfd (Linux RAID autodetect) for MBR-style partition tables, or to FD00 (same) for GPT. You can use fdisk or gdisk, respectively, to do that (see the second sketch below).

    mdadm runs at boot (off the initramfs), scans the available partitions, reads the metadata blocks from all those having type 0xfd, and assembles and starts every RAID device it is able to. This does not require a copy of an up-to-date mdadm.conf in the initramfs image.
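For the first bullet, a minimal sketch on a Debian-style system with initramfs-tools might look like the following; the initrd path and the lsinitramfs check assume the stock Debian layout, so adjust them to your distribution:

    # append the current array definitions to the config read at boot
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf

    # rebuild the initramfs so the pre-boot environment gets the new config
    update-initramfs -u

    # optional sanity check: confirm mdadm.conf made it into the image
    lsinitramfs /boot/initrd.img-$(uname -r) | grep mdadm.conf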

Which method to prefer is up to you. Personally, I like the second, but if you happen to have several (many) RAID devices and only want to start some of them at boot (those required for a working root filesystem) and activate the rest later, the first approach, or a combination of the two, is the way to go.
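If you go with the second approach, the type change itself is a one-liner per disk. A sketch, assuming /dev/sdb1 sits on an MBR disk and /dev/sdc1 on a GPT disk; the device names are examples only, and older sfdisk versions used --change-id instead of --part-type:

    # MBR: set partition 1 on /dev/sdb to type fd (Linux RAID autodetect);
    # interactively this is the 't' command in fdisk
    sfdisk --part-type /dev/sdb 1 fd

    # GPT: set partition 1 on /dev/sdc to type code FD00 (Linux RAID)
    sgdisk -t 1:FD00 /dev/sdc

    # verify that mdadm can see the superblocks on the members
    mdadm --examine /dev/sdb1 /dev/sdc1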

Solution 2

I know it's an old post, but I was struggling with this issue and this is my result:

My disks (Seagate) were "frozen". You can check whether you have the same issue with:

hdparm -I /dev/sdb

This showed:

Security: 
Master password revision code = 65534
    supported
not enabled
not locked
    **frozen**
not expired: security count
    supported: enhanced erase

I wasn't able to change this setting. The disks worked fine with regular partitions, but when I formatted them as Linux RAID, they lost their partition table and were "empty" after reboot.

So I created the RAID on partitions, not on whole devices:

mdadm --create --verbose /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

And now they are fine after reboot and everything works as expected.
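For reference, a sketch of that partition-first workflow from scratch (GPT, two empty disks; the device names are examples, and partitioning will destroy whatever is on them):

    # one whole-disk partition per drive, typed as Linux RAID (FD00)
    sgdisk -n 1:0:0 -t 1:FD00 /dev/sdb
    sgdisk -n 1:0:0 -t 1:FD00 /dev/sdc

    # build the mirror on the partitions, not on the bare devices
    mdadm --create --verbose /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

    # record the array and refresh the initramfs so it comes back after reboot
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u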


Comments

  • peon
    peon over 1 year

    I have problems with mdadm after reboot: I can't reassemble /dev/md0.

    I'm working on Debian Wheezy.

    I have done the following steps:

    sudo mdadm --create --verbose /dev/md0 --level=6 --raid-devices=4 /dev/sd[b-e]
    cat /proc/mdstat
    sudo mdadm --readwrite /dev/md0
    sudo mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    echo check > /sys/block/md0/md/sync_action
    sudo pvcreate /dev/md0
    sudo pvdisplay
    sudo vgcreate vgraid6 /dev/md0
    sudo lvcreate -l 100%FREE -n lvHD vgraid6
    sudo mkfs.ext4 -v /dev/vgraid6/lvHD
    

    Everything up to this point works successfully.

    After mounting the RAID, I could use it, create files, access it from other PCs...

    Now comes the problem:

    After rebooting the server (reboot now), the RAID does not exist anymore; /dev/md0 is gone.

    First I checked /etc/mdadm/mdadm.conf:

    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #
    #DEVICE partitions containers
    ...
    CREATE owner=root group=disk mode=0660 auto=yes
    MAILADDR root
    ARRAY /dev/md0 metadata=1.2 name=media:0 UUID=cb127a0b:ad4eb61d:e0ba8f82:db4b062d
    

    After that I try:

    $ mdadm --stop --scan
    $ mdadm --assemble --scan
    

    or:

    $ sudo  mdadm --assemble /dev/md0 /dev/sd[b-e]
        mdadm: Cannot assemble mbr metadata on /dev/sdb
        mdadm: /dev/sdb has no superblock - assembly aborted
    
    
    $ sudo  mdadm --examine /dev/sd[b-e]
    /dev/sdb:
       MBR Magic : aa55
    Partition[0] :   4294967295 sectors at            1 (type ee)
    /dev/sdc:
       MBR Magic : aa55
    Partition[0] :   4294967295 sectors at            1 (type ee)
    /dev/sdd:
       MBR Magic : aa55
    Partition[0] :   4294967295 sectors at            1 (type ee)
    /dev/sde:
       MBR Magic : aa55
    Partition[0] :   4294967295 sectors at            1 (type ee)
    

    The mdadm daemon is running (ps aux | grep mdadm), but /proc/mdstat is empty:

    $ cat /proc/mdstat
    Personalities :
    unused devices: <none>
    

    What's wrong?

  • kostix
    kostix over 9 years
    Consider marking my answer as accepted then. And really you should have been commenting on it instead of posting this in the form of an answer (which it isn't).
  • sudo
    sudo over 8 years
    This should be the accepted answer.
  • liang
    liang over 6 years
    Assuming multiple RAID devices as in the answer, what's the approach to "activate the rest later"?
  • kostix
    kostix over 6 years
    @liang, the system would bring up the rest of the RAID devices via its init service as part of the "normal" system boot (that is, what happens after the initial boot sequence, the one involving the initramfs, is complete).
  • kostix
    kostix over 6 years
    @liang, IOW, the only thing the "early boot environment" is required to do is bring up those RAID devices which contain the OS. The rest can be done by the OS itself once the bootstrapping process is handed off to it.
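As a rough illustration of that split (not from the original thread; the array name is an example): keep only the array(s) needed for the root filesystem in the copy of mdadm.conf baked into the initramfs, and bring up the rest from the running system, for instance from an init script:

    # assemble every array listed in /etc/mdadm/mdadm.conf that is not up yet
    mdadm --assemble --scan

    # or assemble one specific, non-root array by name
    mdadm --assemble /dev/md1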