Moving RAID 5 to another computer

Solution 1

I came across this answer while trying to move my RAID1 array of two disks from an old box to a new one. The post suggested simply moving the disks to the new computer and connecting them, no matter which SATA ports on the motherboard they end up on. After that, this command was supposed to be enough:

mdadm --assemble --scan

However, for me this didn't detect any arrays to assemble. After a bit more research I found that there's a config file holding the details of your array - hopefully you still have it on the old machine:

cat /etc/mdadm/mdadm.conf
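
If you no longer have access to the old machine's config, the same ARRAY line can usually be regenerated by scanning the superblocks on the member disks themselves (a generic sketch, not specific to this particular setup):

mdadm --examine --scan

Its output can then be appended to /etc/mdadm/mdadm.conf before re-running the assemble command.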

For me this was a rather simple affair; the configs on the new and old boxes differed by only one line to start with. Depending on the complexity of your array this might be a bit more involved, of course:

ARRAY /dev/md/0  metadata=1.2 UUID=<THE__UUID> name=qnap:0

I added that line manually to the new server's /etc/mdadm/mdadm.conf file and ran the command again:

mdadm --assemble --scan

This time it found the array and started it in read-only state. You can now run a check of the array (--all checks all arrays on the machine):

/usr/share/mdadm/checkarray --all

or just see what state it's in using:

cat /proc/mdstat

That file is continuously updated with progress information while the check or resync runs.
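
To follow the progress continuously, something along these lines is handy (just a convenience, assuming the watch utility is installed):

watch cat /proc/mdstat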

Flipping it to read-write state with this:

mdadm --readwrite md127

caused it to re-sync, which of course takes a good few hours depending on the size and configuration of your array, but after that I had no problems opening the encrypted volume on the RAID and mounting the LVM partitions from it. I got md127 for that last command by checking where the /dev/md/0 symlink points, /dev/md/0 being the device name that mdadm.conf lists for the array.
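
For reference, this is roughly how the kernel device behind the symlink can be looked up and the array inspected before switching it to read-write (a sketch; the md127 name will differ from system to system):

readlink -f /dev/md/0

mdadm --detail /dev/md127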

Hope this helps anyone :)

Solution 2

I eventually found the answer on https://serverfault.com/questions/32709/how-do-i-move-a-linux-software-raid-to-a-new-machine.

Here is what I did:

I booted from a new installation and saved the old fstab and mdadm.conf files to my cloud storage.

It looks like the RAID partition on sda2 (one of the 4 physical disks) had in fact failed. I had the boot on sda1 and the RAID on sda2, sdb1, sdc1, sdd1.

I reinstalled Ubuntu onto a new drive sde.

Reinstalled mdadm.

I knew where the 4 partitions were because I had not changed the disk order, and boot was now on sde1.

I forced the RAID array to reassemble on the same partitions (see the sketch after these steps).

Very luckily, three drives out of four are working, so the RAID is degraded but so far it's working.
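
As a rough sketch (using the partition names from my setup - sda2, sdb1, sdc1 and sdd1 - which you would adjust to your own), the forced reassembly looked something like:

mdadm --assemble --force /dev/md0 /dev/sda2 /dev/sdb1 /dev/sdc1 /dev/sdd1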

The steps I followed are neatly set out in the above link. I must say that I found Digital Ocean to be of great assistance along the way as well. It always seems so simple afterwards but the pathway is treacherous.


Comments

  • user68988 over 1 year

    RAID 5 problem

    The crux of the issue is that I am not sure how to activate an inactive RAID without my efforts being irreversible. I want to attempt to reassemble it non-destructively. Is that even possible?

    In short: transferring a RAID 5 to a new computer if the CPU fails.

    Background

    1. Ubuntu 16.04 running an mdadm software RAID 5 failed and now reboots with "Press Ctrl-D to continue".

    2. My RAID array appears to be intact. See the printouts below. I want to move my RAID 5 from one computer to another in case the problem was hardware.

    3. The problem with the Ubuntu boot partition may simply be that it ran out of space. The partition was only 10G, and although the Ubuntu installation was originally a minimal server install, I expanded it to run the desktop as well. 10G may not have been enough, but it ran like that for 2 months. I just wanted the graphical interface.

    4. I also read recently that the RAID 5 partitions should not have exceeded 1.5T on each disk. I didn't know that at the time, and it has been running like that for about 6 months, although recently it may have exceeded that limit. It is at about 6T now.

    5. My plan is to move the RAID 5 to a new machine with a fresh installation of Ubuntu 16 on a new disk ‘sde’ and remount the RAID on the new system.

    Questions

    1. How do I move the RAID 5 to a new computer? If Ubuntu failed when booting, I should still be able to assemble the RAID on a new computer.

    2. Does "assemble" overwrite the RAID partitions? Will it be irreversible?

    3. If the RAID ran out of space, one would expect the RAID to fail and not the Ubuntu boot.

    4. Alternatively, can I safely remove all the devices from the array and mount them as conventional partitions to read my data? There is about 6T of data spread across the RAID.

    Status Reports

    root@UbuntuServer17:~# cat /etc/fstab
    # /etc/fstab: static file system information.
    # 
    # fstab on WDD running as 5th disk sde
    # Use 'blkid' to print the universally unique identifier for a
    # device; this may be used with UUID= as a more robust way to name devices
    # that works even if disks are added and removed. See fstab(5).
    #
    # <file system> <mount point>   <type>  <options>       <dump>  <pass>
    # / was on /dev/sda2 during installation
    UUID=f37bd21c-9464-4763-b3e7-7f9f6f5154df /               ext4    errors=remount-ro 0       1
    # /boot/efi was on /dev/sda1 during installation
    UUID=4993-9AE3  /boot/efi       vfat    umask=0077      0       1
    # swap was on /dev/sda3 during installation
    UUID=e3b9f5e9-5eb9-47e0-9288-68649263093c none            swap    sw              0       0
    # Steve added - from RAID17 when it crashed
    # / was on /dev/sda1 during installation on RAID17
    #UUID=1672f12a-9cf2-488b-9c8f-a701f1bc985c /               ext4    errors=remount-ro 0       1
    #/dev/md0p1 /media/steve/RAID17 ext4    data=ordered,relatime,stripe=384,nodev,nosuid   0   0
    #UUID=1672f12a-9cf2-488b-9c8f-a701f1bc985c /               ext4    errors=remount-ro 0       1
    #/dev/md0   /media/steve/RAID17 ext4    data=ordered,relatime,stripe=384,nodev,nosuid   0   0
    
    
    
    
    root@UbuntuServer17:~# cat /etc/mdadm/mdadm.conf
    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #
    
    # by default (built-in), scan all partitions (/proc/partitions) and all
    # containers for MD superblocks. alternatively, specify devices to scan, using
    # wildcards if desired.
    #DEVICE partitions containers
    
    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes
    
    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>
    
    # instruct the monitoring daemon where to send mail alerts
    MAILADDR root
    
    # definitions of existing MD arrays
    
    # This file was auto-generated on Sun, 05 Feb 2017 20:34:00 +0200
    # by mkconf $Id$
    # Steve added - maybe should add uuid to fstab file to mount on WD at start - But no sure
    
    # ARRAY /dev/md0 uuid=3b92382f:78784c2b:e7a07a35:c1afcf1d
    ARRAY /dev/md0 uuid=32c91cbf:266a5d14:182f1b34:f92b1ebe
    
    
    
    
    root@UbuntuServer17:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
    md0 : inactive sda2[0](S) sdd1[4](S) sdb1[1](S) sdc1[2](S)
          7803273216 blocks super 1.2
    
    unused devices: <none>
    
    
    root@UbuntuServer17:~# mdadm --examine --scan
    ARRAY /dev/md/0  metadata=1.2 UUID=3b92382f:78784c2b:e7a07a35:c1afcf1d name=RAID17:0
    root@UbuntuServer17:~# 
    
    
    root@UbuntuServer17:~# sudo  fdisk -l
    Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disklabel type: dos
    Disk identifier: 0x5120487a
    
    Device     Boot    Start        End    Sectors  Size Id Type
    /dev/sda1  *        2048   20482047   20480000  9.8G 83 Linux
    /dev/sda2       20514816 3907028991 3886514176  1.8T 83 Linux
    /dev/sda3       20482048   20514815      32768   16M 82 Linux swap / Solaris
    
    Partition table entries are not in disk order.
    
    
    Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disklabel type: dos
    Disk identifier: 0x000a439d
    
    Device     Boot Start        End    Sectors  Size Id Type
    /dev/sdb1        2048 3907028991 3907026944  1.8T 83 Linux
    
    
    Disk /dev/sdc: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disklabel type: dos
    Disk identifier: 0x00044e92
    
    Device     Boot Start        End    Sectors  Size Id Type
    /dev/sdc1        2048 3907028991 3907026944  1.8T 83 Linux
    
    
    Disk /dev/sdd: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disklabel type: dos
    Disk identifier: 0xc7703e92
    
    Device     Boot Start        End    Sectors  Size Id Type
    /dev/sdd1        2048 3907028991 3907026944  1.8T 83 Linux
    
    
    Disk /dev/sde: 465.8 GiB, 500107862016 bytes, 976773168 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: AEC0A022-299A-4283-9F5F-2FCC4CC4609E
    
    Device         Start       End   Sectors   Size Type
    /dev/sde1       2048   1050623   1048576   512M EFI System
    /dev/sde2    1050624 960124927 959074304 457.3G Linux filesystem
    /dev/sde3  960124928 976771071  16646144     8G Linux swap
    root@UbuntuServer17:~# 
    
    root@UbuntuServer17:~# sudo dumpe2fs /dev/sda2
    dumpe2fs 1.42.13 (17-May-2015)
    Filesystem volume name:   <none>
    Last mounted on:          <not available>
    Filesystem UUID:          b474c4d4-af7f-4730-b746-a0c0c49ca08d
    Filesystem magic number:  0xEF53
    Filesystem revision #:    1 (dynamic)
    Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
    Filesystem flags:         signed_directory_hash 
    Default mount options:    user_xattr acl
    Filesystem state:         clean
    Errors behavior:          Continue
    Filesystem OS type:       Linux
    Inode count:              121454592
    Block count:              485814272
    Reserved block count:     24290713
    Free blocks:              478141459
    Free inodes:              121454581
    First block:              0
    Block size:               4096
    Fragment size:            4096
    Reserved GDT blocks:      908
    Blocks per group:         32768
    Fragments per group:      32768
    Inodes per group:         8192
    Inode blocks per group:   512
    Flex block group size:    16
    Filesystem created:       Sat Feb 25 02:16:09 2017
    Last mount time:          n/a
    Last write time:          Sat Feb 25 02:16:09 2017
    Mount count:              0
    Maximum mount count:      -1
    Last checked:             Sat Feb 25 02:16:09 2017
    Check interval:           0 (<none>)
    Lifetime writes:          135 MB
    Reserved blocks uid:      0 (user root)
    Reserved blocks gid:      0 (group root)
    First inode:              11
    Inode size:           256
    Required extra isize:     28
    Desired extra isize:      28
    Journal inode:            8
    Default directory hash:   half_md4
    Directory Hash Seed:      e1e7da74-6e2f-4fa4-a9e0-a13a44338170
    Journal backup:           inode blocks
    dumpe2fs: Corrupt extent header while reading journal super block
    root@UbuntuServer17:~# 
    
  • ankit7540 over 3 years
    Could you accept your answer, which would close this question?