Reassemble RAID 1 array from old system


Solution 1

It appears I had a conflict between the dmraid setup and the mdadm setup. I don't understand the details, but what finally fixed it was to stop dmraid first:

dmraid -an

and then assemble the drives into a whole new md device:

mdadm --assemble /dev/md4 /dev/sdc /dev/sdd

When I did this, /dev/md126 and /dev/md126p1 mysteriously appeared (mysterious to me, but I'm sure someone can explain it), and I mounted md126p1:

mount /dev/md126p1 /mnt/olddrive

And voilà: my data reappeared! There were a couple of corrupted files, but no data loss.
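
As best I can tell, the extra devices are mdadm re-exposing the Intel IMSM set: /dev/md4 acts as the container, /dev/md126 is the RAID 1 volume inside it, and /dev/md126p1 is the first partition on that volume. Something like

cat /proc/mdstat
mdadm --detail /dev/md126

should show the relationship (the device names are simply the ones that appeared on my system).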

Thank you @Dani_l and @MadHatter for your help!
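
To keep the array assembled and mounted across reboots, something along these lines should work; this is only a sketch, and the filesystem type is a guess, so check it with blkid /dev/md126p1 first:

mdadm --detail --scan >> /etc/mdadm.conf

plus an /etc/fstab entry such as

/dev/md126p1  /mnt/olddrive  ext3  defaults  0 2

Mounting by filesystem UUID instead of the md device name is safer, since md numbering can change between boots.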

Solution 2

A bit confused here - is it mdadm RAID or LVM RAID? In the question you mention LVM RAID, yet you keep trying to use mdadm.

For LVM, first use

pvscan -u

Possibly

pvscan -a --cache /dev/sdc /dev/sdd

would be enough to recreate your device. If not, use

vgchange -ay VolGroup00

or

vgcfgrestore VolGroup00
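
If the volume group does show up after scanning, a typical activation sequence would look roughly like this (VolGroup00 is the default name from the RHEL installer, and the logical volume name LogVol00 is only a guess, so check it with lvs first):

pvscan
vgscan
vgchange -ay VolGroup00
lvs VolGroup00
mount /dev/VolGroup00/LogVol00 /mnt/herbert_olddrive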

The other possibility is that you used dmraid. Can you try

dmraid -ay

Note that the disks must be connected to the Intel fakeraid controller (make sure RAID is enabled in the BIOS for the ATA slots the disks are connected to).
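
Before activating anything, it may also help to see what dmraid actually detects:

dmraid -r
dmraid -s

Here dmraid -r lists the disks it recognises as fakeraid members, and dmraid -s shows the status of the RAID sets it found.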

Comments

  • RD Miles over 1 year

    I recently upgraded my OS from RHEL 5 to 6. To do so, I installed the new OS on new disks, and I want to mount the old disks. The old disks are listed as /dev/sdc and sdd in the new system, they were created as a RAID 1 array using LVM, using the default setup from the RHEL install GUI.

    I managed to mount the old disks and use them for the last two weeks, but after a reboot, they did not remount, and I can't figure out what to do to get them back online. I have no reason to believe there is anything wrong with the disks.

    (I'm in the process of making a dd copy of the disks, and I have an older backup, but I hope I don't have to use either of these...)

    Using fdisk -l:

    # fdisk -l
    
    Disk /dev/sdb: 300.1 GB, 300069052416 bytes
    255 heads, 63 sectors/track, 36481 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00042e35
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1   *           1       30596   245760000   fd  Linux raid autodetect
    /dev/sdb2           30596       31118     4194304   fd  Linux raid autodetect
    /dev/sdb3           31118       36482    43080704   fd  Linux raid autodetect
    
    Disk /dev/sda: 300.1 GB, 300069052416 bytes
    255 heads, 63 sectors/track, 36481 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00091208
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *           1       30596   245760000   fd  Linux raid autodetect
    /dev/sda2           30596       31118     4194304   fd  Linux raid autodetect
    /dev/sda3           31118       36482    43080704   fd  Linux raid autodetect
    
    Disk /dev/sdc: 640.1 GB, 640135028736 bytes
    255 heads, 63 sectors/track, 77825 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00038b0e
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdc1               1       77825   625129281   fd  Linux raid autodetect
    
    Disk /dev/sdd: 640.1 GB, 640135028736 bytes
    255 heads, 63 sectors/track, 77825 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00038b0e
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdd1               1       77825   625129281   fd  Linux raid autodetect
    
    Disk /dev/md2: 4292 MB, 4292804608 bytes
    2 heads, 4 sectors/track, 1048048 cylinders
    Units = cylinders of 8 * 512 = 4096 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    
    
    Disk /dev/md1: 251.7 GB, 251658043392 bytes
    2 heads, 4 sectors/track, 61439952 cylinders
    Units = cylinders of 8 * 512 = 4096 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    
    
    Disk /dev/md127: 44.1 GB, 44080955392 bytes
    2 heads, 4 sectors/track, 10761952 cylinders
    Units = cylinders of 8 * 512 = 4096 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    

    And then

    # mdadm --examine /dev/sd[cd]
    mdadm: /dev/sdc is not attached to Intel(R) RAID controller.
    mdadm: /dev/sdc is not attached to Intel(R) RAID controller.
    /dev/sdc:
              Magic : Intel Raid ISM Cfg Sig.
            Version : 1.1.00
        Orig Family : 8e7b2bbf
             Family : 8e7b2bbf
         Generation : 0000000d
         Attributes : All supported
               UUID : c8c81af9:952cedd5:e87cafb9:ac06bc40
           Checksum : 014eeac2 correct
        MPB Sectors : 1
              Disks : 2
       RAID Devices : 1
    
      Disk01 Serial : WD-WCASY6849672
              State : active
                 Id : 00010000
        Usable Size : 1250259208 (596.17 GiB 640.13 GB)
    
    [Volume0]:
               UUID : 03c5fad1:93722f95:ff844c3e:d7ed85f5
         RAID Level : 1
            Members : 2
              Slots : [UU]
        Failed disk : none
          This Slot : 1
         Array Size : 1250258944 (596.17 GiB 640.13 GB)
       Per Dev Size : 1250259208 (596.17 GiB 640.13 GB)
      Sector Offset : 0
        Num Stripes : 4883824
         Chunk Size : 64 KiB
           Reserved : 0
      Migrate State : idle
          Map State : uninitialized
        Dirty State : clean
    
      Disk00 Serial : WD-WCASY7183713
              State : active
                 Id : 00000000
        Usable Size : 1250259208 (596.17 GiB 640.13 GB)
    mdadm: /dev/sdd is not attached to Intel(R) RAID controller.
    mdadm: /dev/sdd is not attached to Intel(R) RAID controller.
    /dev/sdd:
              Magic : Intel Raid ISM Cfg Sig.
            Version : 1.1.00
        Orig Family : 8e7b2bbf
             Family : 8e7b2bbf
         Generation : 0000000d
         Attributes : All supported
               UUID : c8c81af9:952cedd5:e87cafb9:ac06bc40
           Checksum : 014eeac2 correct
        MPB Sectors : 1
              Disks : 2
       RAID Devices : 1
    
      Disk00 Serial : WD-WCASY7183713
              State : active
                 Id : 00000000
        Usable Size : 1250259208 (596.17 GiB 640.13 GB)
    
    [Volume0]:
               UUID : 03c5fad1:93722f95:ff844c3e:d7ed85f5
         RAID Level : 1
            Members : 2
              Slots : [UU]
        Failed disk : none
          This Slot : 0
         Array Size : 1250258944 (596.17 GiB 640.13 GB)
       Per Dev Size : 1250259208 (596.17 GiB 640.13 GB)
      Sector Offset : 0
        Num Stripes : 4883824
         Chunk Size : 64 KiB
           Reserved : 0
      Migrate State : idle
          Map State : uninitialized
        Dirty State : clean
    
      Disk01 Serial : WD-WCASY6849672
              State : active
                 Id : 00010000
        Usable Size : 1250259208 (596.17 GiB 640.13 GB)
    

    Trying to assemble:

    # mdadm --assemble /dev/md3 /dev/sd[cd]
    mdadm: no RAID superblock on /dev/sdc
    mdadm: /dev/sdc has no superblock - assembly aborted
    

    I've tried:

    # mdadm --examine --scan /dev/sd[cd]
    ARRAY metadata=imsm UUID=c8c81af9:952cedd5:e87cafb9:ac06bc40
    ARRAY /dev/md/Volume0 container=c8c81af9:952cedd5:e87cafb9:ac06bc40 member=0 UUID=03c5fad1:93722f95:ff844c3e:d7ed85f5
    

    I tried adding this to the /etc/mdadm.conf file, but it doesn't seem to help. I'm not sure what to try next. Any help would be appreciated.

    EDIT 1: Does "Magic : Intel Raid ISM Cfg Sig." indicate that I need to use dmraid?

    EDIT 2: As suggested below, I tried dmraid, but I don't know what the response means:

    # dmraid -ay
    RAID set "isw_cdjaedghjj_Volume0" already active
    device "isw_cdjaedghjj_Volume0" is now registered with dmeventd for monitoring
    RAID set "isw_cdjaedghjj_Volume0p1" already active
    RAID set "isw_cdjaedghjj_Volume0p1" was not activated
    

    EDIT 2b: So, now I can see something here:

    # ls /dev/mapper/
    control  isw_cdjaedghjj_Volume0  isw_cdjaedghjj_Volume0p1
    

    but it doesn't mount:

    # mount /dev/mapper/isw_cdjaedghjj_Volume0p1 /mnt/herbert_olddrive/
    mount: unknown filesystem type 'linux_raid_member'
    

    EDIT 2c: Ok, maybe this might help:

    # mdadm -I /dev/mapper/isw_cdjaedghjj_Volume0
    mdadm: cannot open /dev/mapper/isw_cdjaedghjj_Volume0: Device or resource busy.
    
    # mdadm -I /dev/mapper/isw_cdjaedghjj_Volume0p1
    #
    

    The second command returns nothing. Does this mean anything or am I way off track?

    EDIT 3: /proc/mdstat:

    # cat /proc/mdstat
    Personalities : [raid1]
    md127 : active raid1 sda3[1] sdb3[0]
          43047808 blocks super 1.1 [2/2] [UU]
          bitmap: 0/1 pages [0KB], 65536KB chunk
    
    md1 : active raid1 sda1[1]
          245759808 blocks super 1.0 [2/1] [_U]
          bitmap: 2/2 pages [8KB], 65536KB chunk
    
    md2 : active raid1 sda2[1]
          4192192 blocks super 1.1 [2/1] [_U]
    
    unused devices: <none>
    

    md1 and md2 are RAID arrays on sda and sdb, which are used by the new OS.

    • MadHatter almost 10 years
      Have you tried mdadm --assemble /dev/md3 /dev/sd[cd]1? Also, could you edit into your question the output of cat /proc/mdstat - that md127 looks like it might be the right thing already.
    • RD Miles almost 10 years
      I also tried the same thing using sd[cd] without the 1, and got: mdadm: no RAID superblock on /dev/sdc
    • MadHatter almost 10 years
      That's weird, because you've shown us those partitions. Could you try it without the glob, just do mdadm --assemble /dev/md3 /dev/sdc1 /dev/sdd1? Also, still waiting for the cat /proc/mdstat.
    • RD Miles almost 10 years
      Same response: mdadm: cannot open device /dev/sdc1: No such file or directory... I don't really understand what Volume0 means in the --examine output: I think it indicates that there is a VolumeGroup, but I'm not sure how to set it up.
  • RD Miles almost 10 years
    Using vgscan -vvvv, I find this phrase "#filters/filter-partitioned.c:45 /dev/sdd: Skipping: Partition table signature found" ... does this help? Should I delete the partition table?
  • Dani_l almost 10 years
    Quite possibly you had a dmraid setup using the BIOS "RAID" capability.
  • RD Miles almost 10 years
    I added the output from dmraid -ay, @Dani_l, can you make sense of it?
  • Dani_l almost 10 years
    Looks like dmraid is catching the already active sd[ab] devices. A stupid question: what's the result of mdadm -I /dev/md/Volume0? That's a capital I, btw, not a lowercase L.
  • RD Miles almost 10 years
    mdadm: stat failed for /dev/md/Volume0: No such file or directory.
  • Dani_l almost 10 years
    Actually, I thought it did show attachment to the Intel controller - hint: the metadata on /dev/sdc is imsm.
  • RD Miles almost 10 years
    True. Then what is mdadm complaining about?
  • Dani_l almost 10 years
    But it seems like /dev/sdd might not be. Can you check in the BIOS, or during boot, whether you have interactive Intel management at POST?
  • RD Miles almost 10 years
    The same message appears for both sdc and sdd. I'll look into the BIOS.
  • RD Miles almost 10 years
    @Dani_l - The motherboard does have LSI Logic Embedded SATA RAID and Intel® Matrix Storage Manager. However, the jumpers have been set to LSI, and LSI has been turned on in the BIOS.
  • Dani_l almost 10 years
    Can you invoke the LSI console during reboot, before the OS loads, and make sure the disks are configured as mirrored devices there? You might have to press Ctrl-M for the LSI console (or Ctrl-I for the Intel one). Try to enter both and see which configuration exists there.