How to remove a disk from an LVM volume group?


Solution 1

Since the filesystem you'll need the disk removed from is your root filesystem, and the filesystem type is ext4, you'll have to boot the system from some live Linux boot media first. Ubuntu Live would probably work just fine for this.

Once booted from the external media, run sudo vgchange -ay ubuntu-vg to activate the volume group so that you'll be able to access the LVs, but don't mount the filesystem: ext2/3/4 filesystems need to be unmounted for shrinking. Then shrink the filesystem to 10G (or whatever size you wish - it can easily be extended again later, even on-line):

sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv 10G

Pay attention to the messages output by resize2fs - if it says the filesystem cannot be shrunk that far, specify a bigger size and try again.

This is the only step that needs to be done while booted on the external media; for everything after this point, you can boot the system normally.
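
For reference, here is a minimal sketch of the whole live-media step, assuming the VG is named ubuntu-vg as in your pvdisplay output. Note that resize2fs refuses to shrink a filesystem that has not just passed a forced check, so run e2fsck -f first:

sudo vgchange -ay ubuntu-vg                           # activate the VG without mounting anything
sudo e2fsck -f /dev/mapper/ubuntu--vg-ubuntu--lv      # forced check, required before an offline shrink
sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv 10G  # shrink the ext4 filesystem to 10G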

At this point, the filesystem should have been shrunk to 10G (or whatever size you specified). The next step is to shrink the LV. It is vitally important that the new size of the LV is equal to or greater than the new size of the filesystem! You don't want to cut off the tail end of the filesystem when shrinking the LV. It's safest to specify a slightly larger size here:

sudo lvreduce -L 15G /dev/mapper/ubuntu--vg-ubuntu--lv
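
If you want to be extra careful, you can double-check afterwards that the LV did not end up smaller than the filesystem; a rough sketch (the filesystem size is the Block count multiplied by the Block size reported by tune2fs):

sudo lvs --units g ubuntu-vg/ubuntu-lv                                                # current LV size
sudo tune2fs -l /dev/mapper/ubuntu--vg-ubuntu--lv | grep -E 'Block count|Block size'  # filesystem size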

Now, use pvdisplay or pvs to check whether LVM considers /dev/sdb1 completely free. In pvdisplay, the Total PE and Free PE values for sdb1 should be equal; in pvs output, PFree should equal PSize. If this is not the case, it will be time to use pvmove:

sudo pvmove /dev/sdb1
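
If you want to watch the per-PV numbers before and after the move, something like this should do (pv_used and pv_free are standard pvs output fields):

sudo pvs -o pv_name,vg_name,pv_size,pv_used,pv_free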

After this, the sdb1 PV should definitely be totally free according to LVM and it can be reduced out of the VG.

sudo vgreduce ubuntu-vg /dev/sdb1

If you wish, you can then remove the LVM signature from the ex-PV:

sudo pvremove /dev/sdb1

But if you are going to overwrite it anyway, you can omit this step.
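
If you want a final sanity check before physically pulling the drive, something like this should do (wipefs -n is a dry run that only reports signatures and erases nothing):

sudo pvs                  # after pvremove, /dev/sdb1 should no longer appear at all
sudo wipefs -n /dev/sdb1  # lists any signatures still present on the partition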

After these steps, the shrunken filesystem will still be sized at 10G (or whatever you specified) even though the LV might be somewhat bigger than that. To fix that:

sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv

When extending a filesystem, you don't have to specify a size: the tool automatically extends the filesystem to match the exact size of the device containing it. In this case, the filesystem will be sized to match the LV.
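
You can confirm the result afterwards; the filesystem size reported by df should again match the LV size reported by lvs:

df -h /
sudo lvs ubuntu-vg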

Later, if you wish to extend the LV+filesystem, you can do it with just two commands:

sudo lvextend -L <new size> /dev/mapper/ubuntu--vg-ubuntu--lv
sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv

You can do this even while the filesystem is in use and mounted. Because shrinking a filesystem is harder than extending it, it might be useful to hold some amount of unallocated space in reserve at the LVM level - you will be able to use it at a moment's notice to create new LVs and/or to extend existing LVs in the same VG as needed.
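
As a variation, on a reasonably recent lvm2 you can let lvextend resize the filesystem in the same step with -r (--resizefs), and -l +100%FREE uses all remaining free space in the VG:

sudo lvextend -r -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv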

Solution 2


If you run vgreduce ubuntu-vg /dev/sdb1 and it gives the message Physical volume "/dev/sdb1" still in use, that means there is still data allocated on it and you can't remove it without causing issues.

Otherwise, it will be removed from the volume group; you can then run pvremove /dev/sdb1 to remove the LVM labels from it, take the disk out of the machine, and use it elsewhere.

You can use pvmove /dev/sdb1, but if you get No extents available for allocation, it means there are no free extents on the other PVs in the volume group to move the data to.

If you run pvdisplay -m, you can see the mapping data for the physical volumes, including the physical extents. For example, if you see FREE segments listed under Physical Segments, you can run pvmove -v /dev/sdb1:<physical_extents_with_data> /dev/sda3:<free_physical_extents> --alloc anywhere. In your case it doesn't look like that is going to work, because the pvdisplay output shows both PVs are full, which is why you are getting the No extents available for allocation message.
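
Before attempting an extent-level move like that, it's worth confirming that the VG has any free extents at all to receive the data, for example:

sudo vgs -o vg_name,vg_size,vg_free ubuntu-vg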

Before you do any of this, make sure that you have backed up your data. It looks like you're going to have to start all over again if you want to remove that disk, unless you can use lvreduce. In the future, I recommend creating multiple volume groups so that you only have to rebuild the one with the system installation.


Comments

  • Mark Smith over 1 year

    Can anyone help? I have 2 disks spanning my main partition: one is 460 GB and the other is 1 TB. I would like to remove the 1 TB disk and use it in another machine.

    The volume group isn't using a lot of space anyway, I only have docker with a few containers using that disk and my docker container volumes are on a different physical disk anyway.

    If I just remove the disk (physically), it is going to cause problems, right?

    Here is some info

    
    pvdisplay
    
    
      --- Physical volume ---
      PV Name               /dev/sda3
      VG Name               ubuntu-vg
      PV Size               <464.26 GiB / not usable 2.00 MiB
      Allocatable           yes (but full)
      PE Size               4.00 MiB
      Total PE              118850
      Free PE               0
      Allocated PE          118850
      PV UUID               DA7Q8E-zJEz-2FzO-N64t-HtU3-2Z8P-UQydU4
    
      --- Physical volume ---
      PV Name               /dev/sdb1
      VG Name               ubuntu-vg
      PV Size               931.51 GiB / not usable 4.69 MiB
      Allocatable           yes (but full)
      PE Size               4.00 MiB
      Total PE              238466
      Free PE               0
      Allocated PE          238466
      PV UUID               Sp6b1v-nOj2-XXdb-GZYf-1Vej-cfdr-qLB3GU
    

    LVM confuses me a little :-)

    Is it not just a simple case of saying,

    "remove yourself from the VG and assign anything you are using to the remaining group member"?

    It's worth noting that the 1 TB was added afterwards, so I assume it's easier to remove?

    Any help really appreciated

    EDIT

    Also some more info

    df -h
    Filesystem                         Size  Used Avail Use% Mounted on
    udev                                16G     0   16G   0% /dev
    tmpfs                              3.2G  1.4M  3.2G   1% /run
    /dev/mapper/ubuntu--vg-ubuntu--lv  1.4T  5.1G  1.3T   1% /
    

    It seems it's using only 1%.

    Also, the output of lvs:

    lvs
      LV        VG        Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
      ubuntu-lv ubuntu-vg -wi-ao---- 1.36t
    

    EDIT

    pvdisplay -m
      --- Physical volume ---
      PV Name               /dev/sda3
      VG Name               ubuntu-vg
      PV Size               <464.26 GiB / not usable 2.00 MiB
      Allocatable           yes (but full)
      PE Size               4.00 MiB
      Total PE              118850
      Free PE               0
      Allocated PE          118850
      PV UUID               DA7Q8E-zJEz-2FzO-N64t-HtU3-2Z8P-UQydU4
    
      --- Physical Segments ---
      Physical extent 0 to 118849:
        Logical volume  /dev/ubuntu-vg/ubuntu-lv
        Logical extents 0 to 118849
    
      --- Physical volume ---
      PV Name               /dev/sdb1
      VG Name               ubuntu-vg
      PV Size               931.51 GiB / not usable 4.69 MiB
      Allocatable           NO
      PE Size               4.00 MiB
      Total PE              238466
      Free PE               0
      Allocated PE          238466
      PV UUID               Sp6b1v-nOj2-XXdb-GZYf-1Vej-cfdr-qLB3GU
    
      --- Physical Segments ---
      Physical extent 0 to 238465:
        Logical volume  /dev/ubuntu-vg/ubuntu-lv
        Logical extents 118850 to 357315
    

    EDIT

    Output of

    lsblk -f
    NAME   FSTYPE     LABEL UUID                                   MOUNTPOINT
    loop0  squashfs                                                /snap/core/9066
    loop2  squashfs                                                /snap/core/9289
    sda
    ├─sda1 vfat             E6CC-2695                              /boot/efi
    ├─sda2 ext4             0909ad53-d6a7-48c7-b998-ac36c8f629b7   /boot
    └─sda3 LVM2_membe       DA7Q8E-zJEz-2FzO-N64t-HtU3-2Z8P-UQydU4
      └─ubuntu--vg-ubuntu--lv
           ext4             b64f2bf4-cd6c-4c21-9009-76faa2627a6b   /
    sdb
    └─sdb1 LVM2_membe       Sp6b1v-nOj2-XXdb-GZYf-1Vej-cfdr-qLB3GU
      └─ubuntu--vg-ubuntu--lv
           ext4             b64f2bf4-cd6c-4c21-9009-76faa2627a6b   /
    sdc    xfs              1a9d0e4e-5cec-49f3-9634-37021f65da38   /gluster/bricks/2
    
    

    sdc above is a different drive - and not related.

  • Mark Smith almost 4 years
    vgreduce ubuntu-vg /dev/sdb1 Physical volume "/dev/sdb1" still in use
  • Mark Smith almost 4 years
    So how do I get access to that data and move it out? I mean, I can't just cd into /dev/sdb1.
  • telcoM almost 4 years
    pvmove is the command you need here: if you run pvmove /dev/sdb1 just like that, without specifying a destination, it means "If possible, make /dev/sdb1 empty for me by moving any data in there to other PVs in the same VG." It's like it was specifically designed for just this situation... :-)
  • Mark Smith almost 4 years
    @telcoM pvmove /dev/sdb1 No extents available for allocation
  • Mark Smith almost 4 years
    Updated question with pvdisplay -m
  • Mark Smith almost 4 years
    Excellent! Thanks for all the help here, it worked!