Replacing 2 disks (RAID 1) with a larger pair under a 3ware controller in Linux


Solution 1

OK, this answer builds on grs's answer, so credit goes there for 70% of it.

Notes:

  • if this answer suits you, GET a backup NOW.
  • if you own a UPS, connect it to the PC in question NOW.
  • The following procedure was carried out in Linux, on DATA disk arrays. It may need some modifications to work on OS/boot arrays.
  • The following procedure requires several restarts, which I don't call out explicitly since I completed it over a span of a couple of weeks, trying and failing on numerous occasions. The good thing is that while the PC was on I had no further downtime and lost no data (i.e. I never needed to fall back on my backups).

To sum up the situation:

  • you can't migrate from RAID1 to RAID1 on a 3ware 9650SE system.
  • you can't split the disks and expect that /c0/uX will automagically update its array size.
  • you must delete a unit and recreate it for the controller to detect the larger disks.

So the key is to delete one drive at a time and create a new array each time. Overall:

  1. split the RAID1 array. This will generate two arrays with the old disk size (2TB in my case).

    tw_cli /c0/u1 migrate type=single
    

    the previous /dev/sdX, which was pointing to the RAID1 /u1, should still exist (and work!). You'll also get a new unit /u2, which is based on the 2nd drive of the mirror.

  2. delete the unit for the mirror disk that is no longer in use (in my case it belongs to the new unit /u2, which will have acquired a new /dev/sdX device node after a restart).

    tw_cli /c0/u2 del
    
  3. create a new single unit with the unused disk. NOTE: I did this step from the BIOS, so I am not sure the command below is exactly how it should be done. In the BIOS I did a "create unit", not a "migrate". Someone please verify this.

    tw_cli /c0/u2 migrate type=single disk=3
    

    the new /u2 unit should 'see' the full 3TB.

  4. go ahead and transfer the data from the 2TB disk to the 3TB disk (see the sketch after this list).

  5. once the data are on the new unit, update all references to the new /dev/sdX.

  6. the remaining 2TB disk is (should be!) now unused, so go ahead and delete its unit.

    tw_cli /c0/u1 del
    
  7. create a new single unit with the unused disk.

    tw_cli /c0/u1 migrate type=single disk=2
    

    the new /u1 unit should have 3TB space now, too.

  8. finally, take a deep breath and merge the two single units into the new, expanded RAID1:

    tw_cli /c0/u2 migrate type=raid1 disk=2
    

    /u1 should now disappear and unit /u2 should start rebuilding.

  9. Enjoy life. Like, seriously.
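For steps 4 and 5, here is a minimal sketch of what I mean, assuming the old unit is still mounted at /mnt/old, the new 3TB unit shows up as /dev/sdc and you want ext4 on it (the device name, mount points and filesystem type are assumptions; per the comments below it was a plain copy, not dd):

    # step 4: put a filesystem on the new 3TB unit and copy the data over
    # (device names, mount points and ext4 are assumptions)
    mkfs.ext4 /dev/sdc                  # assumed device name of the new single unit
    mkdir -p /mnt/new
    mount /dev/sdc /mnt/new
    rsync -aHAX /mnt/old/ /mnt/new/     # plain copy: permissions, hard links, ACLs, xattrs

    # step 5: update everything that referenced the old /dev/sdX,
    # preferably by UUID so a later device renumbering doesn't bite you
    blkid /dev/sdc                      # note the UUID and use it in /etc/fstab etc.

I went with a plain file-level copy rather than dd so the new filesystem could be created at the full 3TB from the start.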

Solution 2

You would need to update the u1 size before increasing the filesystem from within the OS; the OS will not "see" the new size until the 3ware controller notifies it.

The unit capacity expansion in 3ware is called migration. I am certain it works for RAID5 and RAID6, but I didn't try it with RAID1. Here is an example of the migration command to run:

# tw_cli /c0/u1 migrate type=raid1 disk=p2-p3

When this completes, fdisk -l /dev/sdb should report 3TB and vgdisplay <VG name> will list some free space. From there you would increase the VG size, then the respective LV, and finally the filesystem within the LV.
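A minimal sketch of that last part, assuming /dev/sdb is the PV and the volume group and logical volume are called vg0 and lv_data with ext4 on top (those names and the filesystem type are assumptions):

    pvresize /dev/sdb                         # grow the PV to the new unit size
    vgdisplay vg0                             # free extents should now show up
    lvextend -l +100%FREE /dev/vg0/lv_data    # grow the LV into the free space
    resize2fs /dev/vg0/lv_data                # grow the ext4 filesystem (works online)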

Edit: I think you are out of luck - see page 129 of the User Guide.
You could migrate your RAID1 only to a different array type.

Here is an alternative (it carries some risk, so make sure your backups are good):

  1. tw_cli /c0/u1 migrate type=single - this will break apart your u1 unit into two single drives;
  2. tw_cli /c0/u1 migrate type=raid1 disk=2-3 - this should migrate your single unit back to RAID1 with the correct size

Of course, there are alternative approaches to this; the one I listed above is for the case where you want your data online all the time.
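Either way, you can keep an eye on the controller while the migration runs (a sketch; u1 is an assumption, use whatever unit number yours ends up with):

    tw_cli /c0 show         # unit list: Size(GB), status, and the %V/I/M progress column
    tw_cli /c0/u1 show      # per-unit details for the migrating unit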

Solution 3

Maybe your kernel did not receive updates from the controller.

Try to update the disk info by typing:

partprobe /dev/sdb

It will force the kernel to re-read the partition tables and disk properties.

Also try:

hdparm -z /dev/sdb

and/or:

sfdisk -R /dev/sdb

because partprobe does not always work...
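For convenience, a small sketch that tries each of these in turn and then checks whether the kernel now sees the new size (the device name is an assumption):

    DEV=/dev/sdb                   # assumed device name, adjust to your unit
    partprobe "$DEV" \
      || hdparm -z "$DEV" \
      || sfdisk -R "$DEV"          # fall back only if the previous command fails
    blockdev --getsize64 "$DEV"    # size in bytes as the kernel currently sees it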



Comments

  • nass
    nass almost 2 years

    I have a 3ware 9650SE with 2x 2TB disks in RAID-1 topology.

    I recently replaced the disks with 2 larger (3TB) ones, one by one. The whole migration went smoothly. The problem I have now is that I don't know what more I have to do to make the system aware of the increase in size of this drive.

    Some info:

    root@samothraki:~# tw_cli /c0 show all
    
    /c0 Model = 9650SE-4LPML
    /c0 Firmware Version = FE9X 4.10.00.024
    /c0 Driver Version = 2.26.02.014
    /c0 Bios Version = BE9X 4.08.00.004
    /c0 Boot Loader Version = BL9X 3.08.00.001
    
    ....
    
    Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy
    ------------------------------------------------------------------------------
    u0    RAID-1    OK             -       -       -       139.688   Ri     ON     
    u1    RAID-1    OK             -       -       -       1862.63   Ri     ON     
    
    VPort Status         Unit Size      Type  Phy Encl-Slot    Model
    ------------------------------------------------------------------------------
    p0    OK             u0   139.73 GB SATA  0   -            WDC WD1500HLFS-01G6 
    p1    OK             u0   139.73 GB SATA  1   -            WDC WD1500HLFS-01G6 
    p2    OK             u1   2.73 TB   SATA  2   -            WDC WD30EFRX-68EUZN0
    p3    OK             u1   2.73 TB   SATA  3   -            WDC WD30EFRX-68EUZN0
    

    Note that the disks p2 & p3 are correctly identified as 3TB, but the RAID1 array u1 still shows the 2TB size.

    I followed the guide in the LSI 3ware 9650SE 10.2 codeset (note: the 9.5.3 codeset user guide contains exactly the same procedure).

    I triple-sync my data and unmount the RAID array u1. Next I remove the RAID array from the command line using the command:

    tw_cli /c0/u1 remove
    

    and finally I rescan the controller to find the array again:

    tw_cli /c0 rescan
    

    Unfortunately the new u1 array is still identified as 2TB.

    What could be wrong?

    Some extra info: the u1 array corresponds to /dev/sdb, which in turn is a physical volume of a larger LVM volume group. Now that I have replaced both drives, it appears that the partition table is empty. Yet the LVM volume works fine. Is that normal?!

    root@samothraki:~# fdisk -l /dev/sdb 
    
    Disk /dev/sdb: 2000.0 GB, 1999988850688 bytes
    255 heads, 63 sectors/track, 243151 cylinders, total 3906228224 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    
    root@samothraki:~# 
    
  • nass
    nass over 10 years
    unfortunately none of these worked :( .. it must be that the problem is still on the controller's side and not the kernel's...
  • nass
    nass over 10 years
    hi there, the command from above causes an error: Error: (CLI:144) Invalid drive(s) specified. What can you make of it?
  • grs
    grs over 10 years
    My syntax is wrong, specifically the disk=p2-p3 part. I don't remember exactly; maybe it should be disk=2-3 instead. You could check the help page: tw_cli /c0/u1 help.
  • nass
    nass over 10 years
    I have already seen it; it is not exactly intuitive what I should type there. disk=<p:-p..> . Not sure exactly how to interpret that...
  • grs
    grs over 10 years
  • nass
    nass over 10 years
    nope, tw_cli /c0/u1 migrate type=raid1 disk=2:3 (or 2-3) yields the response: Error: (CLI:008) Invalid ID specified. Found a specified ID already in use.
  • nass
    nass over 10 years
    thank you for your continued support. I have indeed split the array. Then, after many failed attempts (based on your edit), I ended up deleting the unit sdc (from the BIOS). Then I recreated a new single unit, again from the BIOS. Finally this new unit has a capacity of 3TB, but the old unit (sdb) is still 2TB; I'll have to delete that too from the BIOS (I can't find the corresponding tw_cli command). What still eludes me is how to convert u2 (sdc) to a RAID1 and attach u1 (sdb) to it.
  • Dogsbody
    Dogsbody about 10 years
    I'm just about to attempt this so I want to check some of your statements please. When you "transfer the data from the 2TB disk to the 3TB disk" are you just doing a dd? I guess you are also rebooting between steps 5 and 6?
  • nass
    nass about 10 years
    @Dogsbody I just did a simple copy, not dd.