How to grow RAID 1 following disk upgrade?

mdadm cannot grow the RAID past the partition boundaries. You should have enlarged the partitions before re-syncing each drive; then the grow would have worked. Can you tell us which metadata format you are using? On a new install it should be 1.2, but if it's sufficiently old it may be 0.90. Growing in your situation would be easier if it were 1.2.

mdadm --detail /dev/md1
/dev/md1:
        Version : 1.2

If it's 1.2, all you have to do is make the partitions (sdb2, sda2) larger. Make sure only the end of each partition moves; the start must remain the same, or your RAID will break. If you are unsure, do it for one disk only, so the other can still save your behind in case something goes wrong. You can do this with fdisk, but a better alternative is parted, or even gparted if you prefer a GUI.

For parted, the following command should work (dangerous: it writes the partition table without asking):

parted /dev/sdb unit s rm 2 mkpart primary 609374208 100%

Check the fdisk output again to make sure it looks correct; reboot to see if everything still works (/proc/mdstat should show the RAID in sync, UU); do the same for /dev/sda, and after another reboot, try to grow again.
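
Once both partitions are enlarged and the array has re-synced, the remaining steps might look like this (a sketch using /dev/md1 from the example above; resize2fs assumes an ext2/3/4 filesystem on the array, so use the matching tool for whatever you actually run):

mdadm --grow /dev/md1 --size=max   # let the array use all the space in its partitions
cat /proc/mdstat                   # wait for any resync to finish
resize2fs /dev/md1                 # grow the filesystem to fill the enlarged array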

If it's still 0.90 metadata, I'd take this opportunity to build a new RAID 1 with 1.2 metadata. From a live CD, fail one drive, create a new RAID on it with one drive missing, dd or rsync -aAHSX the data over, add the other drive, and so on.
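
From a live CD, that rebuild might look roughly like this (a sketch only; /dev/md2 and the mount points are names I'm assuming, and the disk you fail loses its copy of the data, so double-check device names against your fdisk output):

mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1   # take one drive out of the old array
mdadm --create /dev/md2 --level=1 --raid-devices=2 --metadata=1.2 /dev/sdb1 missing
mkfs.ext4 /dev/md2                                   # assuming ext4; pick your filesystem
mkdir -p /mnt/old /mnt/new
mount /dev/md0 /mnt/old
mount /dev/md2 /mnt/new
rsync -aAHSX /mnt/old/ /mnt/new/                     # copy everything, preserving ACLs/xattrs/hardlinks
umount /mnt/old /mnt/new
mdadm --stop /dev/md0                                # retire the old array
mdadm /dev/md2 --add /dev/sda1                       # new array resyncs onto the second disk

If the array hosts your root filesystem, remember to update /etc/mdadm/mdadm.conf and the bootloader afterwards.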

For growing, I think you'd still have to fail a drive, enlarge the partition, then re-add it: 0.90 stores the metadata at the end of the device, and it will not be found if you move that end by enlarging the partition.
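
That per-disk cycle would look something like this sketch (device names taken from your fdisk output; wait for each resync to finish before touching the other disk):

mdadm /dev/md1 --fail /dev/sdb2 --remove /dev/sdb2           # drop one member from the array
parted /dev/sdb unit s rm 2 mkpart primary 609374208 100%    # same start, new end
mdadm /dev/md1 --add /dev/sdb2                               # full resync writes fresh 0.90 metadata
# repeat for /dev/sda2, then grow as shown above:
# mdadm --grow /dev/md1 --size=max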

Comments

  • ComfortablyDumb almost 2 years

    I have successfully replaced 2 x 320GB disks with 2 x 1TB and re-synced /dev/md0 & /dev/md1.

    "sudo mdadm --grow /dev/md0 --size=max" results in error "mdadm: component size of /dev/md0 unchanged at 304686016K"

    How can I grow /dev/md0 to the full 1TB?

    Output from fdisk -l & cat /proc/mdstat follows:

    Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000bccd9
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *        2048   609374207   304686080   fd  Linux RAID autodetect
    /dev/sda2       609374208   624998399     7812096   fd  Linux RAID autodetect
    
    Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000baab1
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1   *        2048   609374207   304686080   fd  Linux RAID autodetect
    /dev/sdb2       609374208   624998399     7812096   fd  Linux RAID autodetect
    
    Disk /dev/md1: 7999 MB, 7999520768 bytes
    2 heads, 4 sectors/track, 1953008 cylinders, total 15624064 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/md1 doesn't contain a valid partition table
    
    Disk /dev/md0: 312.0 GB, 311998480384 bytes
    2 heads, 4 sectors/track, 76171504 cylinders, total 609372032 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/md0 doesn't contain a valid partition table
    
    
    mick@mick-desktop:~/Desktop$ cat /proc/mdstat
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
    md0 : active raid1 sdb1[1] sda1[0]
          304686016 blocks [2/2] [UU]
    
    md1 : active raid1 sdb2[1] sda2[0]
          7812032 blocks [2/2] [UU]
    
    unused devices: <none>
    
  • ComfortablyDumb almost 11 years
    Thanks for your response. The version is 0.90, so I will try to build a v1.2 RAID as you suggest. Any preference as to which live disc to use?
  • frostschutz almost 11 years
    The Ubuntu Desktop CD works fine; I even use it as a rescue CD for other distros. You can install any missing software within the live environment (provided you have sufficient RAM).
  • Lars Nordin over 9 years
    Thanks for posting. I tried growing a RAID 1 with the 0.90 metadata format but couldn't. I ended up doing as you suggested: I broke the RAID and rebuilt it using the 1.2 metadata format. Two things I ran into: 1) I had to force creating the new RAID 1 since I didn't have both disks; 2) I had to specify the stripe unit when creating an XFS filesystem on top of the new half RAID - apparently the mkfs.xfs command looks at the RAID configuration to set the stripe units.
  • frostschutz over 9 years
    @LarsNordin, did you use --raid-devices=1? Usually it works fine with --raid-devices=2 and specifying one disk as missing (see the sketch after this thread). The XFS problem is strange, considering RAID 1 does not have stripes...
  • Lars Nordin over 9 years
    @frostschutz, I used --raid-devices=2 and listed the first disk removed from the old RAID 1 array, so I had to add --force. For XFS, I should have phrased it "I ended up specifying the ..." (not "I had to ...") because it is a performance optimization, not a requirement - see xfs.org/index.php/…
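
For reference, the distinction drawn in the last comments might look like this (a sketch; /dev/md2 and /dev/sdb1 are assumed names):

mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdb1 missing   # degraded two-member RAID 1; no --force needed
mdadm --create /dev/md2 --level=1 --raid-devices=1 --force /dev/sdb1   # true one-member array; mdadm insists on --force

In either case, mdadm may still ask for confirmation if the disk carries leftover metadata from the old array.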