Shrink RAID by removing a disk?

For this I am going to assume there are 12 disks in the array, and that each is 1TB big.

That means there is 10TB of usable storage (RAID-6 keeps two disks' worth of parity). The sizes are only for the example: provided you are not using more than about 6 disks (6TB) worth of storage, so that the data comfortably fits on the shrunk array, it doesn't matter what size the disks actually are.
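
Before starting, it is worth confirming the current layout and how much space is actually in use. A minimal sketch of that sanity check, assuming the device names and mount point used throughout this example:

cat /proc/mdstat            # current RAID level and member disks
mdadm --detail /dev/md0     # array size, level and device count
df -h /volume1              # how much data is actually in use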

Obligatory disclaimer: none of this may be supported by Synology, so check with them whether this approach can cause problems, back up beforehand, and shut down any Synology services first. As far as I know Synology uses standard md RAID arrays, and they remain accessible if the disks are moved to a standard server that supports md - so there should be no problems.
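
As part of that backup it may also be worth keeping a copy of the LVM metadata before changing anything; a minimal sketch, assuming the volume group is vg1 as elsewhere in this answer (on DSM this may need to be run through the lvm wrapper, as noted further down for lvreduce):

vgcfgbackup vg1             # writes a metadata backup under /etc/lvm/backup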

Overview

The sequence goes like this:

  1. Reduce the filesystem size
  2. Reduce the logical volume size
  3. Reduce the array size
  4. Resize the file system back
  5. Convert the spare disks into hot spares

File system

Find the main partition using df -h; it should look something like:

Filesystem                Size      Used Available Use% Mounted on
/dev/vg1/volume_1         10T       5T   5T         50% /volume1

Unmount the filesystem and shrink it to the minimum size it needs and no more (resize2fs will typically insist on a clean e2fsck -f before it will shrink an unmounted filesystem):

umount /dev/vg1/volume_1
e2fsck -f /dev/vg1/volume_1
resize2fs -M /dev/vg1/volume_1

Now check:

mount /dev/vg1/volume_1 /volume1
df -h

Filesystem                Size      Used Available Use% Mounted on
/dev/vg1/volume_1         5T       5T    0T        100% /volume1

Volume

To reduce the logical volume size, use lvreduce (making the new size a bit bigger than the filesystem, just in case):

umount /dev/vg1/volume_1
lvreduce -L 5.2T /dev/vg1/volume_1
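
Note that on some DSM versions the standalone lvreduce command may not be installed (see the comment from Scott Dudley below); in that case the same operation can be run through the lvm wrapper:

lvm lvreduce -L 5.2T /dev/vg1/volume_1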

Now that the logical volume has been reduced, use pvresize to reduce the physical volume size:

pvresize --setphysicalvolumesize 5.3T /dev/md0

If the resize fails because some extents are still allocated towards the end of the physical volume (LVM does not guarantee that the logical volume sits at the start of the PV), see this other question for moving the portions of data that were allocated at the end of the physical volume towards the beginning.
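
A hedged sketch of how to check and, if necessary, fix this, using the same /dev/md0 device as above (the 1000-1999 extent range below is purely illustrative):

pvs -v --segments /dev/md0                   # show which physical extent ranges are allocated or free
pvmove --alloc anywhere /dev/md0:1000-1999   # relocate an allocated range elsewhere on the same PV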

Now we have a 5.3T volume on a 10T array, so we can safely reduce the array size by 2T.

Array

Find out the md device:

 pvdisplay -C
 PV         VG      Fmt  Attr PSize   PFree
 /dev/md0   vg1     lvm2 a--  5.3t    0.1t

The first step is to tell mdadm to reduce the number of disks (with --grow):

mdadm --grow -n10 /dev/md0
mdadm: this change will reduce the size of the array.
       use --grow --array-size first to truncate array.
       e.g. mdadm --grow /dev/md0 --array-size 9683819520

This is saying that in order to fit the current array onto 10 disks, we need to reduce the array size.

 mdadm --grow /dev/md0 --array-size 9683819520

Now that it is smaller, we can reduce the number of disks:

 mdadm --grow -n10 /dev/md0 --backup-file /root/mdadm.md0.backup

This will take a long time, and can be monitored with:

 cat /proc/mdstat

Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sda4[0] sdb4[1] sdc4[2] sdd4[3] sde4[4] sdf4[5] sdg4[6] sdh4[7] sdi4[8] sdj4[9] 
      [>....................]  reshape =  1.8% (9186496/484190976)
                              finish=821.3min speed=9638K/sec [UUUUUUUUUU__]

But we don't need to wait: the array size has already been truncated, so the PV, LV and filesystem can be grown back while the reshape carries on in the background.

Resize the PV, LV and filesystem to maximum:

pvresize /dev/md0
lvextend -l +100%FREE /dev/vg1/volume_1
e2fsck -f /dev/vg1/volume_1
resize2fs /dev/vg1/volume_1
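
Once that is done, the volume can be remounted and checked, using the same mount point as earlier in the example:

mount /dev/vg1/volume_1 /volume1
df -h /volume1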

Set spare disks as spares

Nothing to do here: any member disks left over after the reshape automatically become spares. Once the reshaping is complete, check the status:

cat /proc/mdstat

Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sda4[0] sdb4[1] sdc4[2] sdd4[3] sde4[4] sdf4[5] sdg4[6] sdh4[7] sdi4[S] sdj4[S] 
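
To double-check, running mdadm --detail on the array (same device name as used throughout this answer) should now report two spare devices:

mdadm --detail /dev/md0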

Comments

  • tyoc213
    tyoc213 almost 2 years

    I have a Synology NAS with 12 bays. Initially, we decided to allocate all 12 disks for a single RAID-6 volume, but now we would like to shrink the volume to use only 10 disks and assign two HDDs as spares.

    The Volume Manager Wizard can easily expand the volume by adding hard disks, but I have found no way to shrink the volume by removing hard disks. How can I do that without having to reinitialize the whole system?

    • Paul
      Paul over 9 years
      What is the goal here? Currently two disks are used as parity, and so the array can tolerate two failures. If you want two spares, you could just as well leave them nearby and have the same tolerance, but with more disk space.
    • tyoc213
      tyoc213 over 9 years
Sure, but I have to go to the office, pop a disk out and insert a replacement disk. Having a spare allows me to do this remotely.
    • Paul
      Paul over 9 years
      Does your Synology have MDADM built in if you ssh to it?
    • tyoc213
      tyoc213 over 9 years
      Yes, I've access to the mdadm tool.
  • tyoc213
    tyoc213 over 9 years
    Thanks a lot for these detailed instructions. I'll first wait for my RAID array to finish rebuilding after having replaced an HDD (total capacity: 17.86 TB, it's taking some time).
  • tyoc213
    tyoc213 over 9 years
    Also have a look at the mdadm cheat sheet (ducea.com/2009/03/08/mdadm-cheat-sheet).
  • Ramhound
    Ramhound over 6 years
    @Paul - superuser.com/questions/1274328/… flag this comment for removal after you determine if you can help the user
  • Ekleog
    Ekleog over 6 years
Beware! I think this answer could lead to data loss, as is: there is no check that the LVM LV is indeed at the beginning of the PV! (which is not guaranteed with LVM). See unix.stackexchange.com/questions/67702/… (and unix.stackexchange.com/questions/67702/… in case of error) for a way to ensure the end of the PV is free to be shrunk.
  • Paul
    Paul over 6 years
    @Ekleog Thanks, this comment would be better placed as part of the answer in case missed
  • Ekleog
    Ekleog over 6 years
    @Paul Indeed, please feel free to add it :)
  • Paul
    Paul over 6 years
    @Ekleog I can't right now, on the move. Go ahead
  • Ekleog
    Ekleog over 6 years
@Paul Oh, I didn't think I could, so I've just sent a tentative edit :) I've improvised the columns output, but hopefully that's close enough to what the real output would have been for someone in your use case.
  • Scott Dudley
    Scott Dudley over 4 years
    I noticed that the lvreduce command doesn't seem to be installed anywhere with DSM 6.2. Running lvm lvreduce <args> seems to work though.