How can I remove a bad disk from LVM2 with the least data loss on the other PVs?
Solution 1
# pvdisplay
Couldn't find device with uuid EvbqlT-AUsZ-MfKi-ZSOz-Lh6L-Y3xC-KiLcYx.
--- Physical volume ---
PV Name /dev/sdb1
VG Name vg_srvlinux
PV Size 931.51 GiB / not usable 4.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 238466
Free PE 0
Allocated PE 238466
PV UUID xhwmxE-27ue-dHYC-xAk8-Xh37-ov3t-frl20d
--- Physical volume ---
PV Name unknown device
VG Name vg_srvlinux
PV Size 465.76 GiB / not usable 3.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 119234
Free PE 0
Allocated PE 119234
PV UUID EvbqlT-AUsZ-MfKi-ZSOz-Lh6L-Y3xC-KiLcYx
# vgreduce --removemissing --force vg_srvlinux
Couldn't find device with uuid EvbqlT-AUsZ-MfKi-ZSOz-Lh6L-Y3xC-KiLcYx.
Removing partial LV LogVol00.
Logical volume "LogVol00" successfully removed
Wrote out consistent volume group vg_srvlinux
# pvdisplay
--- Physical volume ---
PV Name /dev/sdb1
VG Name vg_srvlinux
PV Size 931.51 GiB / not usable 4.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 238466
Free PE 238466
Allocated PE 0
PV UUID xhwmxE-27ue-dHYC-xAk8-Xh37-ov3t-frl20d
Now everything works fine!
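The sequence above can be condensed into a short sketch. This assumes the `vg_srvlinux` name from this answer; the wrapper only prints the commands by default (DRY_RUN=1), because the --force step permanently deletes any LV that touched the missing disk, so double-check and take backups before actually executing it.

```shell
# Sketch of the Solution 1 sequence. Commands are only printed while
# DRY_RUN=1 (the default); set DRY_RUN=0 and run as root to execute.
DRY_RUN=${DRY_RUN:-1}
VG=vg_srvlinux   # volume group from this answer; substitute your own

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run pvdisplay                                # find the PV shown as "unknown device"
run vgreduce --removemissing "$VG"           # safe form: refuses if an LV still uses the missing PV
run vgreduce --removemissing --force "$VG"   # destructive: also removes those partial LVs
run pvdisplay                                # verify the surviving PV's extents are free again
```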
Solution 2
From the vgreduce man page:
--removemissing
Removes all missing physical volumes from the volume group, if there are no logical volumes
allocated on those. This resumes normal operation of the volume group (new logical volumes
may again be created, changed and so on).
If this is not possible (there are logical volumes referencing the missing physical volumes)
and you cannot or do not want to remove them manually, you can run this option with --force
to have vgreduce remove any partial LVs.
Any logical volumes and dependent snapshots that were partly on the missing disks get removed
completely. This includes those parts that lie on disks that are still present.
If your logical volumes spanned several disks including the ones that are lost, you might
want to try to salvage data first by activating your logical volumes with --partial as
described in lvm (8).
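The salvage path the man page mentions could look roughly like this. The VG and LV names (vg_srvlinux, LogVol00) are taken from this page and the mount point /mnt/rescue is a placeholder; as above, this is a dry-run sketch that echoes the commands unless DRY_RUN=0.

```shell
# Sketch: salvage readable data with --partial BEFORE any --force removal.
DRY_RUN=${DRY_RUN:-1}
VG=vg_srvlinux    # VG name from this page; substitute your own
LV=LogVol00       # LV name from this page; substitute your own

run() { [ "$DRY_RUN" -eq 1 ] && echo "would run: $*" || "$@"; }

run vgchange -ay --partial "$VG"             # activate the VG despite missing PVs
run mount -o ro "/dev/$VG/$LV" /mnt/rescue   # mount read-only; expect I/O errors on lost extents
run rsync -a /mnt/rescue/ /backup/salvage/   # copy whatever is still readable
run umount /mnt/rescue
run vgchange -an "$VG"                       # deactivate before running vgreduce
```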
Author: kissgyorgy
Updated on September 18, 2022

Comments
-
kissgyorgy almost 2 years
I had an LVM2 volume with two disks. The larger disk became corrupt, so I can't pvmove. What is the best way to remove it from the group to save the most data on the other disk? Here is my pvdisplay output:
Couldn't find device with uuid WWeM0m-MLX2-o0da-tf7q-fJJu-eiGl-e7UmM3.
--- Physical volume ---
PV Name unknown device
VG Name media
PV Size 1,82 TiB / not usable 1,05 MiB
Allocatable yes (but full)
PE Size 4,00 MiB
Total PE 476932
Free PE 0
Allocated PE 476932
PV UUID WWeM0m-MLX2-o0da-tf7q-fJJu-eiGl-e7UmM3
--- Physical volume ---
PV Name /dev/sdb1
VG Name media
PV Size 931,51 GiB / not usable 3,19 MiB
Allocatable yes (but full)
PE Size 4,00 MiB
Total PE 238466
Free PE 0
Allocated PE 238466
PV UUID oUhOcR-uYjc-rNTv-LNBm-Z9VY-TJJ5-SYezce
So I want to remove the unknown device (not present in the system). Is it possible to do this without adding a new disk? The filesystem is ext4.
-
kissgyorgy over 12 years If I did a vgreduce --removemissing --force media, what would happen?
-
Steve Townsend almost 11 yearsYeah... hope you didn't need LogVol00... it's gone now.
-
kissgyorgy almost 11 years Better than losing everything...
-
Aquarius Power over 9 years Oh... so vgreduce --removemissing --force $vgname is the way to recover from one missing mirror leg?
-
Aquarius Power over 9 years So basically, if my root / is one leg of a mirror and that mirror fails, I think the boot will fail; then, with a live distro ISO, I can run that command to regain access to my system? So I also think the safest setup is to have /boot outside of LVM, on a simple 2 GB ext4 partition, together with the live distro ISO?
-
psusi over 9 years @AquariusPower, boot should not fail if one leg of the mirror is missing. Personally I prefer to use mdadm to handle the RAID, with LVM on top just to divide the array up into logical volumes. Booting directly from the RAID array instead of having a stand-alone /boot means the system can still boot up just fine if the primary boot disk dies.
-
Aquarius Power over 9 years Mmm... I have, on each PV, a small partition for boot, but each partition is independent; if I put these /boot partitions in sync with RAID, I can probably boot quickly if any of them fails. I like this, thanks :). I also guess you prefer mdadm because (maybe?) LVM mirror sync can be slow and may not sync enough data in time to ensure a safe, seamless boot if one PV fails (e.g. during a blackout).
-
psusi over 9 years @AquariusPower, actually I prefer mdadm for the RAID both because I prefer RAID10 over RAID1, and because it can reshape the array (LVM can't convert a 2-disk mirror into a 3-disk RAID5, for instance).
-
jackohug over 7 yearsIf the disk is bad then the data from the logical volume LogVol00 is already gone. Removing it from the group hasn't removed any more data. Besides, that's what backups are for.
-
dannyman about 7 yearsThis has proven useful to me on multiple occasions now in managing ganeti with drbd.
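The mdadm-under-LVM layout psusi describes in the comments above could be sketched as follows. The disk names /dev/sd[c-f], the array name /dev/md0, and the vg_media/data names are purely hypothetical, and as in the sketches above the commands are echoed rather than run unless DRY_RUN=0.

```shell
# Sketch of mdadm handling the RAID with LVM layered on top (hypothetical disks).
DRY_RUN=${DRY_RUN:-1}
run() { [ "$DRY_RUN" -eq 1 ] && echo "would run: $*" || "$@"; }

# Four hypothetical disks in a RAID10 array managed by mdadm...
run mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sdc /dev/sdd /dev/sde /dev/sdf
# ...and LVM on top purely to carve the array into logical volumes.
run pvcreate /dev/md0
run vgcreate vg_media /dev/md0
run lvcreate -L 100G -n data vg_media
```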