What happens when a disk fails in LVM?


You've omitted a number of important abstraction layers that come with LVM. Logical volumes do not sit on disks directly - they are placed on volume groups (VGs). VGs in turn consist of physical volumes (PVs), which can be whole disks. To cut a long story short: the VG would not come up with a missing PV - i.e. a missing disk - so you would not be able to access any of the logical volumes in that group.
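To make the PV → VG → LV stacking concrete, here is a hedged sketch of how the asker's three virtual disks could be pooled into a single logical volume. The device names (/dev/sdb, /dev/sdc, /dev/sdd) and the VG/LV names are hypothetical, and the commands require root on a machine that actually has those disks:

```shell
# Hypothetical device and volume names; requires root and the LVM2 tools.

# Mark each virtual disk as an LVM physical volume (PV)
pvcreate /dev/sdb /dev/sdc /dev/sdd

# Pool the three PVs into one volume group (VG)
vgcreate data_vg /dev/sdb /dev/sdc /dev/sdd

# Carve a single logical volume (LV) spanning all free space in the VG
lvcreate -l 100%FREE -n data_lv data_vg

# Filesystem and mount point corresponding to /disk in the question
mkfs.ext4 /dev/data_vg/data_lv
mount /dev/data_vg/data_lv /disk
```

Losing any one of the three underlying disks takes out a PV, and with it the whole VG - which is exactly the failure mode discussed above.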

There are recovery procedures, but usually, in a virtualized environment, you would see "all-or-nothing" availability anyway - all disk files would be contained in a single directory which is either accessible with its entire content or not at all (if the datastore is not available for example).
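The recovery procedures mentioned above essentially amount to activating the VG in degraded mode and then discarding the missing PV from its metadata. A hedged sketch, using a hypothetical VG name - note that any data whose extents lived on the failed disk is gone regardless:

```shell
# Hypothetical VG name "data_vg"; requires root and the LVM2 tools.

# Activate the VG even though a PV is missing
# (newer LVM2 uses --activationmode partial; older versions used --partial)
vgchange -ay --activationmode partial data_vg

# Permanently drop the missing PV from the VG metadata;
# LV extents that lived on the failed disk are lost.
vgreduce --removemissing --force data_vg
```

After this you can attempt filesystem repair on the surviving LVs, but expect damage wherever the filesystem spanned the missing disk.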

As for storage efficiency, consider using thin provisioning - "unused" space is not claimed on the datastore. However, it comes at the cost of higher administrative overhead.
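The answer refers to thin provisioning at the datastore level, which is a VMware setting rather than a command. For comparison, LVM itself offers an analogous mechanism via thin pools; a hedged sketch with hypothetical names, assuming a VG called data_vg already exists and the LVM2 thin-provisioning support is installed:

```shell
# Hypothetical names; requires root and LVM2 with thin-provisioning support.

# Create a thin pool inside the existing VG
lvcreate --type thin-pool -L 100G -n thinpool data_vg

# Create a thin LV whose virtual size exceeds the pool's physical size;
# blocks are claimed from the pool only as they are actually written.
lvcreate --type thin -V 500G -n thin_lv --thinpool data_vg/thinpool
```

The administrative overhead mentioned above applies here too: an overcommitted pool that fills up must be caught and grown before writes start failing.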

Author: Stew
Updated on September 18, 2022

Comments

  • Stew
    Stew almost 2 years

I am configuring a Linux server on an ESX 4.1 host. This server needs to have several TBs of data stored on it. We are currently debating whether or not to use LVM. Our current reasoning is that it is best to have multiple 2TB volumes (a limit imposed by ESX) mounted as separate volumes, like so:

    /disk1 - 2TB
    
    /disk2 - 2TB
    
    /disk3 - 2TB
    

    We will be storing directories that range in size from 100GB to 400GB. These directories need to be stored whole and cannot be split up. The concern is that there will be a lot of wasted space if we end up having 1.7TB stored on /disk1 and need to store an additional 400GB. In that case we would need to store the 400GB directory on /disk2, leaving 300GB unused.

    One solution to this problem is LVM, configured as:

     --------
     Disk 1 | 
            |
     Disk 2 |---->/disk
            | 
     Disk 3 | 
     --------
    

    However we are stuck on one simple question. What happens if Disk 2 fails?

    In the first scenario it is obvious what happens if Disk 2 fails: /disk2 would no longer be accessible.

    In the LVM setup, if Disk 2 were to fail, would it be similar (as in, only the data that was stored on Disk 2 is no longer available) or would all data on /disk no longer be accessible?

    • Chopper3
      Chopper3 over 12 years
      If you use ESXi v5 you can use >2TB RDMs to get around this problem - it's what I do.
  • Stew
    Stew over 12 years
    Well, that answers the base question: availability is all or nothing with LVM. I have used LVM before, but I have (luckily) never gone through the recovery procedures. As for thin provisioning, that isn't really a solution. We have a set of LUNs (2TB each) which are dedicated to this server. Even if we have thin provisioning, it doesn't really address the issue of data needing to span multiple 2TB volumes. It looks like the only two options are either upgrading to ESXi 5 (which was in the plans anyway) or living with some wasted space. Thanks for the answer.
  • the-wabbit
    the-wabbit over 12 years
    Why do you have a set of 2 TB LUNs instead of a single LUN in a larger VMFS datastore where you create your three virtual disks? The 2 TB limit only applies to virtual disks, not to the size of the datastore itself.
  • Stew
    Stew over 12 years
    How does that work? I have had a lot of trouble (very cryptic errors) when attempting to mount a LUN larger than 2TB. Also, the EqualLogic plugin for VMware has a 2TB limit for LUN size (though that limit isn't enforced when creating LUNs through the web interface). Outside of perhaps easier management, is there any other advantage to having a large datastore?
  • the-wabbit
    the-wabbit over 12 years
    You would need to work with "VMFS extents" - create a VMFS of 2 TB size, then extend it in 2 TB steps after creation. You can have up to 32 extents, resulting in a total upper limit of 64 TB per VMFS. I am not familiar with the EqualLogic plugin, so I can't say anything about its limitations. And "easier management" is what virtualization is all about in my opinion, so there need not be any other advantage :) BTW, you could also consider letting the virtual host access the storage LUN directly - easy if you have iSCSI, not quite so easy with Fibre Channel.
  • Stew
    Stew over 12 years
    I have thought about using raw device mappings, seemed like a good solution but I actually had no idea about VMFS extents. Going to research that immediately. Thanks for the update this is great info!
  • Max
    Max almost 4 years
    The link to the recovery procedure needs https now: novell.com/coolsolutions/appnote/19386.html