How to mount a disk from a destroyed RAID system?


Solution 1

In my case I brought up CentOS 7 and tried following everyone's instructions on this page, but I kept running into a device-busy message. In my opinion, the reason you are getting the

mdadm: cannot open device /dev/sda1: Device or resource busy

error message is that the device is already assembled or mounted as something else.

I also did not want to make any changes to the disk at all, since my use case was to extract a very large file from my RAID1 array that I had failed to extract every other possible way; the fastest option was to pull one of the drives out. I also want to put the drive back in afterwards and still have my configuration in place.
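
A quick way to check what is claiming the partition (a minimal sketch; lsblk ships with most rescue systems, and /dev/sdb matches the drive in my output below):

# if the partition is already part of an auto-assembled array,
# lsblk shows an md device nested under the partition
lsblk /dev/sdb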

Here is what I did after doing some research on other sites. NOTE: NAS:0 is the name of my NAS device, so substitute appropriately.

The array was brought up automatically, although it would not show up if you were to run the mount command. You can verify that it is active by running:

[root@localhost Desktop]# cat /proc/mdstat 
Personalities : [raid1] 
md127 : active (auto-read-only) raid1 sdb2[0]
      1952996792 blocks super 1.2 [2/1] [U_]

unused devices: <none>

Notice it was automatically assembled as /dev/md127 for me.
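
Before stopping it, you can double-check which member disks the auto-assembled array is using (an optional extra step; /dev/md127 is the device name from the output above):

# show the array's members, state, and metadata version
mdadm --detail /dev/md127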

Ok then:

[root@localhost Desktop]# mdadm -A -R /dev/md9 /dev/sdb2 
mdadm: /dev/sdb2 is busy - skipping

[root@localhost Desktop]# mdadm --manage --stop /dev/md/NAS\:0 
mdadm: stopped /dev/md/NAS:0

[root@localhost Desktop]# mdadm -A -R /dev/md9 /dev/sdb2
mdadm: /dev/md9 has been started with 1 drive (out of 2).

[root@localhost Desktop]# mount /dev/md9 /mnt/

That did it for me.
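
Once the file is copied off, a minimal cleanup sketch to release the drive again (assuming the same device names as above):

umount /mnt
mdadm --stop /dev/md9
# the array reappears under its original name on the next scan or reboot
mdadm --assemble --scan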

If in doubt, dd the drive to make a full copy first, and work from CentOS or another Linux live CD.
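
For example, a sector-level copy to an image file could look like this (a sketch: the source device and target path are assumptions, and the target filesystem must have room for the whole disk):

# copy the whole disk to an image file, continuing past read errors
dd if=/dev/sdb of=/path/to/backup.img bs=64K conv=noerror,sync status=progress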

Solution 2

If you possibly can, you should make a dd image of your entire disk before you do anything, just in case.

You should be able to mount /dev/sda3 directly once mdadm releases it:

mdadm --stop /dev/md2

mount /dev/sda3 /mnt/rescue

If that doesn't work, testdisk can usually find filesystems on raw block devices.
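
testdisk is interactive; a typical session just points it at the raw partition and uses its Analyse menu to locate the filesystem (a sketch, reusing the partition name from the question):

# launch the interactive recovery tool against the raw partition
testdisk /dev/sda3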

Solution 3

I did it the "hard way" (first: if at all possible, clone this disk before you do anything!):

Check dmesg for the RAID disk, or list the partitions (example: sdc1):

$ fdisk -l

Change the partition type from "Linux raid autodetect" to your Linux filesystem type (ext3 or similar), save, and reboot. The fdisk steps are sketched below.
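
In fdisk that type change looks roughly like this (a sketch; the disk and partition number are assumptions):

$ fdisk /dev/sdc
# t  - change a partition's type
# 1  - select partition 1
# 83 - plain "Linux" instead of fd ("Linux raid autodetect")
# w  - write the table and quit, then reboot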

After that

$ mdadm --zero-superblock /dev/sdx 

and voilà, you can mount it (note that zeroing the superblock permanently removes the md metadata from that device):

$ mount /dev/sdc1 /mnt


Author: Naveed Ahmed

Updated on September 18, 2022

Comments

  • Naveed Ahmed over 1 year

    I have a horrible situation: I have to restore data from a damaged RAID system on a rescue Debian Linux. I just want to mount them all to /mnt/rescue in read-only mode, to be able to copy the VMware GSX images to another machine and migrate them to ESXi later on. The output of the relevant commands is as follows.

    fdisk -l
    
    Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
    255 heads, 63 sectors/track, 182401 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x0005e687
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1               1         523     4200997   fd  Linux raid autodetect
    /dev/sda2             524         785     2104515   fd  Linux raid autodetect
    /dev/sda3             786      182401  1458830520   fd  Linux raid autodetect
    
    Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes
    255 heads, 63 sectors/track, 182401 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00014fc7
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1               1         523     4200997   fd  Linux raid autodetect
    /dev/sdb2             524         785     2104515   fd  Linux raid autodetect
    /dev/sdb3             786      182401  1458830520   fd  Linux raid autodetect
    
    Disk /dev/md0: 4301 MB, 4301717504 bytes
    2 heads, 4 sectors/track, 1050224 cylinders
    Units = cylinders of 8 * 512 = 4096 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/md0 doesn't contain a valid partition table
    
    Disk /dev/md1: 2154 MB, 2154954752 bytes
    2 heads, 4 sectors/track, 526112 cylinders
    Units = cylinders of 8 * 512 = 4096 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/md1 doesn't contain a valid partition table
    

    I was trying to mount the disks as follows.

    mount -o ro /dev/sda1 /mnt/rescue
    

    Then I get following error.

    mount: unknown filesystem type 'linux_raid_member'
    

    Guessing the filesystem is not going well either.

    mount -o ro -t ext3 /dev/sda1 /mnt/rescue/
    mount: /dev/sda1 already mounted or /mnt/rescue/ busy
    

    So I tried to create a virtual device as follows.

    mdadm -A -R /dev/md9 /dev/sda1
    

    This results in the following message.

    mdadm: cannot open device /dev/sda1: Device or resource busy
    mdadm: /dev/sda1 has no superblock - assembly aborted
    

    Now I am lost. I have no idea how to recover the disks and get the data back. The following is the output of mdadm --examine for all three partitions (I think it should be 3x RAID1 disks).
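
    (For reference, the command would be, with the partition list taken from the fdisk output above:)

    mdadm --examine /dev/sda1 /dev/sda2 /dev/sda3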

    /dev/sda1:

              Magic : a92b4efc
            Version : 0.90.00
               UUID : 6708215c:6bfe075b:776c2c25:004bd7b2 (local to host rescue)
      Creation Time : Mon Aug 31 17:18:11 2009
         Raid Level : raid1
      Used Dev Size : 4200896 (4.01 GiB 4.30 GB)
         Array Size : 4200896 (4.01 GiB 4.30 GB)
       Raid Devices : 3
      Total Devices : 2
    Preferred Minor : 0
    
        Update Time : Sun Jun  2 00:58:05 2013
              State : clean
     Active Devices : 2
    Working Devices : 2
     Failed Devices : 0
      Spare Devices : 0
           Checksum : 9070963e - correct
             Events : 19720
    
    
          Number   Major   Minor   RaidDevice State
    this     1       8        1        1      active sync   /dev/sda1
    
       0     0       0        0        0      removed
       1     1       8        1        1      active sync   /dev/sda1
       2     2       8       17        2      active sync   /dev/sdb1
    

    /dev/sda2:

              Magic : a92b4efc
            Version : 0.90.00
               UUID : e8f7960f:6bbea0c7:776c2c25:004bd7b2 (local to host rescue)
      Creation Time : Mon Aug 31 17:18:11 2009
         Raid Level : raid1
      Used Dev Size : 2104448 (2.01 GiB 2.15 GB)
         Array Size : 2104448 (2.01 GiB 2.15 GB)
       Raid Devices : 3
      Total Devices : 2
    Preferred Minor : 1
    
        Update Time : Sat Jun  8 07:14:24 2013
              State : clean
     Active Devices : 2
    Working Devices : 2
     Failed Devices : 0
      Spare Devices : 0
           Checksum : 120869e1 - correct
             Events : 3534
    
    
          Number   Major   Minor   RaidDevice State
    this     1       8        2        1      active sync   /dev/sda2
    
       0     0       0        0        0      removed
       1     1       8        2        1      active sync   /dev/sda2
       2     2       8       18        2      active sync   /dev/sdb2
    

    /dev/sda3:

              Magic : a92b4efc
            Version : 0.90.00
               UUID : 4f2b3b67:c3837044:776c2c25:004bd7b2 (local to host rescue)
      Creation Time : Mon Aug 31 17:18:11 2009
         Raid Level : raid5
      Used Dev Size : 1458830400 (1391.25 GiB 1493.84 GB)
         Array Size : 2917660800 (2782.50 GiB 2987.68 GB)
       Raid Devices : 3
      Total Devices : 2
    Preferred Minor : 2
    
        Update Time : Sat Jun  8 14:47:00 2013
              State : clean
     Active Devices : 1
    Working Devices : 1
     Failed Devices : 1
      Spare Devices : 0
           Checksum : 2b2b2dad - correct
             Events : 36343894
    
             Layout : left-symmetric
         Chunk Size : 64K
    
          Number   Major   Minor   RaidDevice State
    this     1       8        3        1      active sync   /dev/sda3
    
       0     0       0        0        0      removed
       1     1       8        3        1      active sync   /dev/sda3
       2     2       0        0        2      faulty removed
    
    cat /proc/mdstat
    Personalities : [raid1]
    md2 : inactive sda3[1](S) sdb3[2](S)
          2917660800 blocks
    
    md1 : active raid1 sda2[1] sdb2[2]
          2104448 blocks [3/2] [_UU]
    
    md0 : active raid1 sda1[1] sdb1[2]
          4200896 blocks [3/2] [_UU]
    

    md2 seems to be damaged, and it is probably the RAID with my VMware images.

    I would like to access the data from md2 (the data on the active, undamaged disk, that is /dev/sda3) by mounting it outside of the RAID.

    Is it a good idea to just execute

    mdadm --manage /dev/md2 --remove /dev/sda3 
    

    (would it even work as md2 is not seen by fdisk)?

    Should I re-assemble the other RAIDs md0 and md1 by running

    mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1
    

    ?

    UPDATE 0: I am not able to assemble md0 and md2.

    root@rescue ~ # mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1
    mdadm: cannot open device /dev/sda1: Device or resource busy
    mdadm: /dev/sda1 has no superblock - assembly aborted
    root@rescue ~ # mdadm --assemble /dev/md2 /dev/sda3 /dev/sdb3
    mdadm: cannot open device /dev/sda3: Device or resource busy
    mdadm: /dev/sda3 has no superblock - assembly aborted
    

    Mounting with mount -t auto is not possible.

    root@rescue ~ # mount -t auto -o ro /dev/md0 /mnt/rescue/
    /dev/md0 looks like swapspace - not mounted
    mount: you must specify the filesystem type
    root@rescue ~ # mount -t auto -o ro /dev/md2 /mnt/rescue/
    mount: you must specify the filesystem type
    

    Mounting /dev/md1 works, but there is no VMware data on it.

    root@rescue /mnt/rescue # ll
    total 139M
    -rw-r--r-- 1 root root 513K May 27  2010 abi-2.6.28-19-server
    -rw-r--r-- 1 root root 631K Sep 16  2010 abi-2.6.32-24-server
    -rw-r--r-- 1 root root 632K Oct 16  2010 abi-2.6.32-25-server
    -rw-r--r-- 1 root root 632K Nov 24  2010 abi-2.6.32-26-server
    -rw-r--r-- 1 root root 632K Dec  2  2010 abi-2.6.32-27-server
    -rw-r--r-- 1 root root 632K Jan 11  2011 abi-2.6.32-28-server
    -rw-r--r-- 1 root root 632K Feb 11  2011 abi-2.6.32-29-server
    -rw-r--r-- 1 root root 632K Mar  2  2011 abi-2.6.32-30-server
    -rw-r--r-- 1 root root 632K Jul 30  2011 abi-2.6.32-33-server
    lrwxrwxrwx 1 root root    1 Aug 31  2009 boot -> .
    -rw-r--r-- 1 root root 302K Aug  4  2010 coffee.bmp
    -rw-r--r-- 1 root root  89K May 27  2010 config-2.6.28-19-server
    ...
    

    UPDATE 1:

    I tried to stop md2 and md0 and assemble once again.

    mdadm -S /dev/md0
    
    root@rescue ~ # mount -t auto -o ro /dev/md0 /mnt/rescue/
    /dev/md0 looks like swapspace - not mounted
    mount: you must specify the filesystem type
    
    mdadm -S /dev/md2
    
    root@rescue ~ # mount -t auto -o ro /dev/md2 /mnt/rescue/
    mount: you must specify the filesystem type
    

    Any ideas?

    UPDATE 2:

    Assembling from one disk is not working, due to the following error message.

    root@rescue ~ # mdadm -S /dev/md2
    root@rescue ~ # mdadm --assemble /dev/md2 /dev/sda3
    mdadm: /dev/md2 assembled from 1 drive - not enough to start the array.
    
    root@rescue ~ # mdadm -S /dev/md2
    mdadm: stopped /dev/md2
    root@rescue ~ # mdadm --assemble /dev/md2 /dev/sdb3
    mdadm: /dev/md2 assembled from 1 drive - not enough to start the array.
    

    Even a new RAID device fails.

    root@rescue ~ # mdadm -S /dev/md9
    mdadm: stopped /dev/md9
    root@rescue ~ # mdadm --assemble /dev/md9 /dev/sda3
    mdadm: /dev/md9 assembled from 1 drive - not enough to start the array.
    
    root@rescue ~ # mdadm -S /dev/md9
    mdadm: stopped /dev/md9
    root@rescue ~ # mdadm --assemble /dev/md9 /dev/sdb3
    mdadm: /dev/md9 assembled from 1 drive - not enough to start the array.
    

    Creating a new md device fails too.

    root@rescue ~ # cat /proc/mdstat
    Personalities : [raid1]
    md0 : active raid1 sda1[1] sdb1[2]
          4200896 blocks [3/2] [_UU]
    
    md1 : active raid1 sda2[1] sdb2[2]
          2104448 blocks [3/2] [_UU]
    
    unused devices: <none>
    root@rescue ~ # mdadm -A -R /dev/md9 /dev/sda3
    mdadm: failed to RUN_ARRAY /dev/md9: Input/output error
    mdadm: Not enough devices to start the array.
    root@rescue ~ # cat /proc/mdstat
    Personalities : [raid1] [raid6] [raid5] [raid4]
    md9 : inactive sda3[1]
          1458830400 blocks
    
    md0 : active raid1 sda1[1] sdb1[2]
          4200896 blocks [3/2] [_UU]
    
    md1 : active raid1 sda2[1] sdb2[2]
          2104448 blocks [3/2] [_UU]
    
    unused devices: <none>
    root@rescue ~ # mdadm -S /dev/md9
    mdadm: stopped /dev/md9
    root@rescue ~ # mdadm -A -R /dev/md9 /dev/sdb3
    mdadm: failed to RUN_ARRAY /dev/md9: Input/output error
    mdadm: Not enough devices to start the array.
    

    UPDATE 3:

    Removing disks from md2 is not working.

    mdadm --remove /dev/md2 /dev/sda3
    mdadm: cannot get array info for /dev/md2
    

    UPDATE 4:

    Finally, running assemble with --force seems to have done it. I am now copying the files to another server.
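
    (The exact invocation was not posted; based on the attempts above, it would presumably have been something like:)

    mdadm --stop /dev/md2
    mdadm --assemble --force /dev/md2 /dev/sda3 /dev/sdb3
    mount -o ro /dev/md2 /mnt/rescue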

    • Admin almost 11 years
      mdadm --assemble is the way to go. Try without --remove.
    • Admin almost 11 years
      Maybe sd?1 is swapspace. Try to assemble md1 and md2 and to mount with mount -t auto ....
    • Admin almost 11 years
      @HaukeLaging: I tried to assemble md0, md1 and md2 (see updated post). Only md1 assembles successfully and mounts. The other two fail to assemble and mount. Any ideas?
    • Admin almost 11 years
      Your data are likely on md2, the largest volume
    • Admin almost 11 years
      Try assembling it from one volume, not two.
    • Admin almost 11 years
      @sendmoreinfo: This is not working: mdadm --assemble /dev/md9 /dev/sda3 mdadm: /dev/md9 assembled from 1 drive - not enough to start the array.
    • Admin almost 11 years
      @TonyStark Why didn't you try mdadm --assemble /dev/md9 /dev/sda3 /dev/sdb3? Otherwise you need --run. Is it possible that md2 is an LVM PV? You could run pvscan; pvdisplay
    • Admin almost 11 years
      @HaukeLaging: I did but it failed with "mdadm: Not enough devices to start the array.".
    • Admin almost 11 years
      What about mdadm -A -R /dev/md9 /dev/sdb3? Maybe there are serious problems with sda3.
    • Admin over 9 years
      Were you able to resolve this issue? Please consider posting a self-answer with the solution that ended up working for you (or accepting the existing answer if that helped) if you did, for the benefit of future visitors.
  • Ferenc Géczi about 6 years
    This answer helped me the most. Thanks! For me it was also mounted under /dev/md127 so I issued the stop like this mdadm --manage --stop /dev/md127.
  • Stephen Rauch almost 4 years
    @DavidJEddy, I did not answer the question. I merely edited it.
  • David J Eddy almost 4 years
    @Eugene. You are a life saver!
  • Captain Fantastic almost 4 years
    After removing a drive from the array with sudo mdadm --zero-superblock --force /dev/sda3, it will have an unknown partition type; the testdisk command should be able to find this partition and change it from "Linux Raid Auto" to just "Linux".