How do I fix "mdadm: /dev/sdc1 not large enough to join array" for three identical discs?


Solution 1

OK, what's going on is that you originally built the array from the whole disks, without any partitions. It looks like you later added a partition table, which effectively corrupts the array. You can't add the disk partition now because it is too small: mdadm expects the whole disk.

The errors you see from grub are because, with the 0.90 metadata format, it cannot tell whether the RAID metadata is supposed to apply to the whole disk or to the partition. You should rebuild the array using the newer metadata format, and preferably partition the individual disks first this time.
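A rough sketch of that rebuild, using the device names from the question. This is destructive and wipes the existing array, so have a verified backup first; the sfdisk one-liner is one way to create a single full-disk partition of type fd (Linux raid autodetect).

```shell
# WARNING: destructive -- back up your data first.
# Device names are taken from the question and may differ on your system.

# Stop the old array and wipe the old 0.90 superblocks
mdadm --stop /dev/md_d0
mdadm --zero-superblock /dev/sdc /dev/sdd /dev/sde

# Give each disk one full-size partition, type fd (Linux raid autodetect)
for d in /dev/sdc /dev/sdd /dev/sde; do
    sfdisk "$d" <<'EOF'
,,fd
EOF
done

# Recreate the array from the partitions with modern 1.2 metadata
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
      --metadata=1.2 /dev/sdc1 /dev/sdd1 /dev/sde1
```

After the rebuild finishes, update /etc/mdadm/mdadm.conf and re-run update-grub so GRUB picks up the new array.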

Solution 2

You need to disable HPA on your disk.

Check whether HPA is enabled:

$ hdparm -N /dev/sdc

   /dev/sdc:

    max sectors   = 586070255/586072368, HPA is enabled

Then disable HPA by setting the visible maximum to the full sector count:

$ hdparm -N p586072368 /dev/sdc

The 'p' prefix makes the change persistent across reboots. Then reboot the computer and add the disk back to the RAID array.
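After the reboot you can verify that the HPA is gone and re-add the partition. A sketch, with the sector counts taken from the hdparm output above:

```shell
# Both numbers should now match, e.g.
#   max sectors = 586072368/586072368, HPA is disabled
hdparm -N /dev/sdc

# The HPA was hiding this many sectors -- roughly 1 MiB, which was
# enough to make the device smaller than the array's component size:
echo $((586072368 - 586070255))   # 2113

# Re-add the partition and watch the resync
mdadm /dev/md_d0 --add /dev/sdc1
cat /proc/mdstat
```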

Author by jeeloo

Updated on September 18, 2022

Comments

  • jeeloo
    jeeloo over 1 year

After upgrading my server to Ubuntu 12.04 LTS, grub started complaining and giving errors about my RAID array. Everything still seems to be working, but it is a bit unnerving to have grub giving errors.

    Setting up grub-pc (1.99-21ubuntu3.7) ...
    error: found two disks with the index 2 for RAID md0.
    error: superfluous RAID member (3 found).
    

    I'm getting a lot of these errors when grub is updated.

The facts are: I have three identical discs in a RAID 5 setup. Two of the discs each have one primary partition, which is what was added to the array, but the third disk was added without a primary partition, i.e. mdadm --manage /dev/md_d0 --add /dev/sdc

I'm guessing that this is the reason grub is complaining.

Since discovering this problem I have removed the disc that was missing a primary partition, created a primary partition on it, and verified that it looks the same with cfdisk /dev/xxx followed by Print Partition Table. But when I try to add the new partition to the RAID array, I get the message that the partition is too small to join it.

    > sudo mdadm /dev/md_d0 --add /dev/sdc1

    mdadm: /dev/sdc1 not large enough to join array

    The partition tables all look the same,

     Partition Table for /dev/sdc
    
                   First       Last
     # Type       Sector      Sector   Offset    Length   Filesystem Type (ID) Flag
    -- ------- ----------- ----------- ------ ----------- -------------------- ----
       Pri/Log           0        2047*     0#       2048*Free Space           None
     1 Primary        2048* 3907029167*     0  3907027120*Linux raid auto (FD) None
    
    Partition Table for /dev/sdd
    
                   First       Last
     # Type       Sector      Sector   Offset    Length   Filesystem Type (ID) Flag
    -- ------- ----------- ----------- ------ ----------- -------------------- ----
       Pri/Log           0        2047*     0#       2048*Free Space           None
     1 Primary        2048* 3907029167*     0  3907027120*Linux raid auto (FD) None
    
    Partition Table for /dev/sde
    
                   First       Last
     # Type       Sector      Sector   Offset    Length   Filesystem Type (ID) Flag
    -- ------- ----------- ----------- ------ ----------- -------------------- ----
       Pri/Log           0        2047*     0#       2048*Free Space           None
     1 Primary        2048* 3907029167*     0  3907027120*Linux raid auto (FD) None
    

Actually, if I print the partition table as raw data in cfdisk there are some differences, but I cannot decipher what they mean.

    -> diff sde.raw sdc.raw
    1c1
    < Disk Drive: /dev/sde
    ---
    > Disk Drive: /dev/sdc
    30c30
    < 0x1B0: 00 00 00 00 00 00 00 00 B7 E9 70 74 00 00 00 20
    ---
    > 0x1B0: 00 00 00 00 00 00 00 00 4B 0C 58 1C 00 00 00 20
    

I realized that I could try copying the MBR from one of the working discs using dd, but I still get the same error, even though the partition table is identical in the raw output from cfdisk.

    $ sudo dd if=/dev/sdd of=/tmp/sdd-mbr.bin bs=512 count=1
    $ sudo dd if=/tmp/sdd-mbr.bin of=/dev/sdc bs=512 count=1
    
    $ cat /proc/partitions
    major minor  #blocks  name
       8       48 1953514584 sdd
       8       49 1953513560 sdd1
       8       32 1953514584 sdc
       8       33 1953513560 sdc1
       8       64 1953514584 sde
       8       65 1953513560 sde1
    

Now the raw comparison of the partition tables gives identical output and the partitions seem to be the same size, but I still get the same error when trying to add /dev/sdc1 to the array.

I guess my question is whether there is any way to fix this without having to take the whole array apart and recreate it from scratch?

Output from mdadm -D /dev/md_d0 (the array is still rebuilding since I added /dev/sdc again):

    /dev/md_d0:
            Version : 0.90
      Creation Time : Sat Aug 14 21:06:13 2010
         Raid Level : raid5
         Array Size : 3907028992 (3726.03 GiB 4000.80 GB)
      Used Dev Size : 1953514496 (1863.02 GiB 2000.40 GB)
       Raid Devices : 3
      Total Devices : 3
    Preferred Minor : 0
        Persistence : Superblock is persistent
    
        Update Time : Fri Jan 11 18:36:06 2013
              State : clean, degraded, recovering 
     Active Devices : 2
    Working Devices : 3
     Failed Devices : 0
      Spare Devices : 1
    
             Layout : left-symmetric
         Chunk Size : 64K
    
     Rebuild Status : 53% complete
    
               UUID : 74998045:22316376:01f9e43d:ac30fbff (local to host server)
             Events : 0.19988
    
        Number   Major   Minor   RaidDevice State
           3       8       32        0      spare rebuilding   /dev/sdc
           1       8       64        1      active sync   /dev/sde
           2       8       48        2      active sync   /dev/sdd
    

I realize now that it looks like the whole array is made up of the actual devices rather than the partitions. The question then is why the sdd1 & sde1 partitions are left on the hard drives, while the one on /dev/sdc is overwritten as soon as I add it to the array.
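The size figures above make the mismatch concrete (a quick arithmetic sketch; the KiB values are taken from the mdadm -D and /proc/partitions output above):

```shell
# Used Dev Size (space the array needs per member): 1953514496 KiB
# Whole disk  (sdc,  /proc/partitions):             1953514584 KiB -- fits
# Partition   (sdc1, /proc/partitions):             1953513560 KiB -- too small
echo $((1953514584 - 1953514496))   # 88 KiB of headroom on the raw disk
echo $((1953514496 - 1953513560))   # 936 KiB short on the partition
```

So no amount of MBR copying can help: the partition starts 2048 sectors into the disk, and the space lost to that offset guarantees sdc1 is smaller than the array's component size.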

    • psusi
      psusi over 11 years
      What does mdadm -D /dev/md_d0 show?
  • jeeloo
    jeeloo over 11 years
OK, so I guess what you are saying is that I will have to start over. Is there no way to save the array in its current state?
  • psusi
    psusi over 11 years
@jeeloo, well, it seems you have re-added the whole drive, so the array is fine, but yes, to make grub happy you will have to rebuild it. Actually, I think you can zero out the superblocks and recreate the array with metadata 1.0 and --assume-clean, and that would fix it, but certainly have a backup first.
  • m3nda
    m3nda almost 8 years
    Can "with the 0.9 metadata format" that be avoided? I just removed a raid device "just to test", and make changes on the content... and now I cannot join it again. If I rebuild those partitions i should sync before, and I expected the mdadm to do so. I'll try to sync device with dd then rebuild again the md without formatting.