What is the best practice for adding disks in LVM


Solution 1

RHEL6 LVM Admin Guide

According to the RHEL 6 Logical Volume Manager Administration guide, even if you're going to use an entire drive as a physical volume in an LVM volume group, it's recommended that you still partition it first:

Excerpt from the guide "RHEL 6 Logical Volume Manager Administration":

2.1.2. Multiple Partitions on a Disk

LVM allows you to create physical volumes out of disk partitions. It is generally recommended that you create a single partition that covers the whole disk to label as an LVM physical volume for the following reasons:

Administrative convenience

It is easier to keep track of the hardware in a system if each real disk only appears once. This becomes particularly true if a disk fails. In addition, multiple physical volumes on a single disk may cause a kernel warning about unknown partition types at boot-up.

LVM Howto

Section 11.1. Initializing disks or disk partitions of the LVM Howto states as follows:

Excerpt from the LVM Howto:

For entire disks:

Run pvcreate on the disk:

# pvcreate /dev/hdb

This creates a volume group descriptor at the start of the disk.

Not Recommended

Using the whole disk as a PV (as opposed to a partition spanning the whole disk) is not recommended because of the management issues it can create. Any other OS that looks at the disk will not recognize the LVM metadata and display the disk as being free, so it is likely it will be overwritten. LVM itself will work fine with whole disk PVs.
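The partition-then-pvcreate approach the guides recommend looks roughly like this (a sketch; /dev/sdb is a placeholder device name, and the commands assume a modern parted and lvm2):

```shell
# Create a GPT label and a single partition spanning the whole disk
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart primary 1MiB 100%
# Flag the partition as an LVM physical volume
parted -s /dev/sdb set 1 lvm on
# Initialize the partition (not the whole disk) as a PV
pvcreate /dev/sdb1
```

This way any other tool or OS that inspects the disk sees an occupied, labeled partition rather than apparently free space.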

If you get an error that LVM can't initialize a disk with a partition table on it, first make sure that the disk you are operating on is the correct one. If you are very sure that it is, run the following:

DANGEROUS

The following commands will destroy the partition table on the disk being operated on. Be very sure it is the correct disk.

# dd if=/dev/zero of=/dev/diskname bs=1k count=1
# blockdev --rereadpt /dev/diskname
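On systems with a recent util-linux, wipefs is a somewhat less blunt alternative to zeroing the first sector, since it removes only the signatures it recognizes (still dangerous; the device name below is a placeholder and must be double-checked):

```shell
# DANGEROUS: erases all known filesystem and partition-table signatures
wipefs --all /dev/diskname
# Ask the kernel to re-read the (now empty) partition table
blockdev --rereadpt /dev/diskname
```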

Conclusions

These are the primary sources I would trust when deciding whether to create a single partition on a HDD before adding it as a physical volume. As other answers and comments have indicated, you wouldn't be wrong to just add the entire drive without a partition.

To me it's like driving my car with my seat belt on. If I never get in an accident the seat belt serves no purpose, but if I ever do get in an accident, I'm sure glad I was wearing it.

Follow-up #1 (To @Joel's comments)

I thought the two guides above were two pretty good reasons. They're both official guides: one from Red Hat, the other a Howto put together by the LVM team.

Here's another reason. Without partitioning the HDD, no ID is explicitly set on the drive to clearly identify how it's being used.

# fdisk -l
...
/dev/sda6       318253056   956291071   319019008   8e  Linux LVM

As an administrator of systems, the intent of how this particular drive is being used is much more obvious to me and others with the 8e type set than without it.
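Setting that type ID takes a single command; for instance with sfdisk from util-linux (a sketch; the device and partition number are illustrative):

```shell
# Mark partition 6 on /dev/sda as type 8e (Linux LVM) on an MBR disk
sfdisk --part-type /dev/sda 6 8e
# On a GPT disk, the LVM partition type is a GUID instead:
# sfdisk --part-type /dev/sda 6 E6D6D379-F507-44C2-A23C-238F2A3DF928
```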

I appreciate what you're saying @Joel. I too worked at a Fortune 500 company where we had hundreds of Linux deployments, desktop and server, physical and virtual, as well as large storage deployments, so I get what you're saying.

Solution 2

It's preferable to have some commonly recognized descriptors (metadata) on the disk, and the MBR partition table serves as exactly such a descriptor. Even GPT keeps an old-style MBR partition table (the protective MBR) to indicate its presence.

Indeed you lose some disk space, but the loss is negligible, while the advantage of understanding what's on the disk (and where) is self-evident.

Solution 3

Creating physical volumes on partitions that take up 100% of the disk is almost never the right thing to do. I say "almost" just because I take the attitude that just because I can't think of a reason to do something, that doesn't mean there's no reason to do it. That said, I can't think of a single reason to put partitions on a disk at 100% of the space if it's going to be LVM.

You're getting no discernible benefit in exchange for getting some of the rigidity of partitioning back. If these are SAN-backed physical volumes, and you do that, there are only two ways to expand the storage space in the volume group:

  1. Present a new larger LUN, add it to the volume group, pvmove off the LUN you inexplicably partitioned, remove it from the volume group, and tell the SAN people to unpresent it. That can be done online (with a performance hit, and assuming there's enough SAN space in your storage pool to hold these two LUNs simultaneously), so it's doable.
  2. The only other way is to go back to dealing with partitions, which is part of the reason people like well-designed volume management schemes (like btrfs, lvm, zfs, etc). You can edit the physical volume's partition table and hope partprobe lets you read the new size in, but in my personal experience that only works about one time out of two, and it requires you to unmount the filesystem (i.e. it forces you to go offline, another reason people like volume managers).

If you do a whole disk the SAN admin can expand the LUN out for you, you re-scan the SCSI bus, it picks up the new size of the LUN, then you do a pvresize to expand the physical volume out. All without taking any filesystems offline.
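In command form, that whole-disk path is short (a sketch; sdb is a placeholder device name):

```shell
# After the SAN admin grows the LUN, have the kernel re-read its size
echo 1 > /sys/block/sdb/device/rescan
# Grow the PV to fill the enlarged device; the VG gains free extents
pvresize /dev/sdb
```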

Regarding the MBR point: you don't typically take PVs from one system and present them to another in an enterprise environment. Even if you did, if it's LVM you're going to want the OS you're presenting the LUN to, to support LVM. Otherwise what's the point of presenting it? If it does support LVM, you get to see all the physical volume information, volume group information, and logical volumes (assuming this is the only PV in the volume group). So it self-documents that way.
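That self-documentation is what the standard LVM reporting commands show (read-only queries, safe to run anywhere lvm2 is installed):

```shell
pvs   # physical volumes, their VG membership and free space
vgs   # volume groups with total and free capacity
lvs   # logical volumes within each VG
```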

Basically: partitioning a whole disk to 100% is like demanding that the waiter who brought you an apple pie also bring you a knife as well. When he does you throw the knife to the side and just bury your face in the pie. Meaning: it doesn't make sense to insist on a tool to portion something out into smaller pieces if you're just going to use it all in one go anyways.

Solution 4

From my experience, partitioning makes sense if you are testing, or in a small environment where extra disk/storage is not available. It's fine for school, or for working in your garage. In the real world, with virtual servers where you can expand a disk on demand, it's better to let LVM manage the raw/entire disk instead of partitioning it. It is easy and flexible to manage without rebooting your server. Do you know how much time that saves? Multiply it by all the servers you may need to manage! Multiple times I have run into the problem that, because of the partitions/slices, you need to reboot the server since the kernel may not recognize the new partition table. When you add a raw disk/virtual disk to your LVM and later need to expand a filesystem, LVM on a raw disk is great. Running a simple command such as "echo 1 > /sys/block/XXX/device/rescan" (where XXX is your disk: sdb, sdc, sdd, etc.) will rescan the disk for the additional space without rebooting, and boom! You can extend your filesystem on the fly. It takes literally five minutes to extend a disk without rebooting your Linux server. With a partitioned disk this process is complicated.
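The steps described above can be sketched as follows (device and volume names are placeholders):

```shell
# 1. After growing the virtual disk, rescan it so the kernel sees the new size
echo 1 > /sys/block/sdb/device/rescan
# 2. Grow the physical volume to match the device
pvresize /dev/sdb
# 3. Extend the logical volume and resize its filesystem in one step (-r)
lvextend -r -l +100%FREE /dev/myvg/mylv
```

All of this happens online, with the filesystem still mounted.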

Author: MacGyver

Updated on September 18, 2022

Comments

  • MacGyver
    MacGyver almost 2 years

    According to the Linux manpages you can add raw disks as well as partitions to a volume group.

    In other documentation (RedHat, CentOS or openSUSE), all examples refer to adding partitions to the VG instead of raw disks. What is common (best) practice?

  • frostschutz
    frostschutz about 11 years
You may not even lose any disk space. Check pvdisplay (PV Size: ... not usable X MiB); if X is larger than 1 MiB you can just as well partition without losing any more.
  • Hauke Laging
    Hauke Laging about 11 years
    @frostschutz Well attentive but playing smart-ass: The space "available for wasting" is not "dev size modulo PE size" but "(dev size minus metadata space (384K)) modulo PE size". Whether that results in more or less depends, of course.
  • MacGyver
    MacGyver about 11 years
Also LVM has a descriptor. Why first create a FAT-table and then the LVM descriptor? LVM stores the metadata in the second sector. The one thing I can think of for first creating a FAT is disaster recovery, or for some beginner Linux administrators (administration of disks).
  • MacGyver
    MacGyver about 11 years
HP-UX uses LVM too. On that platform it's common practice to add raw disks and let LVM do its thing on the disks. LVM 2.x
  • frostschutz
    frostschutz about 11 years
    In theory you are not wrong, in practice you see everyday issues such as OS and installers (even Linux) offering to format this supposedly free disk - because they didn't recognize LVM. At the same time, there is no downside (performance-wise) to using partitions. So in a home user, desktop, multi-os environment, it's safer to stick to partitions.
  • Bratchley
    Bratchley about 11 years
    I haven't seen the issues with the installers myself. The kernel is supposed to do the equivalent of pvscan at boot, so the kernel on the installer disc should have scanned all block devices looking for LVM heads. I would probably file a bug with whoever the vendor is explaining that their installer is fubar'd. For home installs, the issue is the same, even if the root filesystem spans two disks your primary disk is going to be partitioned for /boot and when the kernel loads it will do the volume scan. That's how you're even able to boot to LVM.
  • Bratchley
    Bratchley about 11 years
    but on the downside part, there's no benefit of partitions outside of BIOS and grub support (hence /boot). Even for home users. There's also very little benefit (your HDD isn't going to get bigger), just a good habit to get in.
  • poige
    poige about 11 years
@user39597, you're mixing things up. FAT stands for File Allocation Table; that's an MS-DOS thing. The MBR partition table is the de-facto standard, and lots of different tools that are not aware of LVM would still know that the disk is partitioned and occupied; it's a precautionary measure.
  • Bratchley
    Bratchley about 11 years
Just because Red Hat says it doesn't make it true. I've yet to hear a real reason why you would partition a disk, and I can only think of reasons not to. Also, "It is easier to keep track of the hardware in a system if each real disk only appears once." Is this really an issue? It's going to show up in fdisk -l once, in /sys/block once, and in your BIOS once. Where is it supposed to be duplicated? (cont'd)
  • Bratchley
    Bratchley about 11 years
    I'm speaking from personal experience, we just had to add space to a lot of volumes. They were partitioned so we ran into all sorts of problems where the kernel wasn't letting go of the partition table. So we were forced to do a re-partition then a reboot, creating a service outage for no good reason. as for "Any other OS that looks at the disk will not recognize the LVM metadata and display the disk as being free, so it is likely it will be overwritten" that's just not true at all. In windows it will just show up as an unused disk (but the admin knows better), what OS are they talking about?
  • MacGyver
    MacGyver about 11 years
    Thanks guys :) This discussion was really helpful to me.
  • slm
    slm about 11 years
    @MacGyver, you can see why this issue isn't that clear when you search for it. There's a history of issues that people have dealt with over time that may or may not still be relevant, there are issues with implementations of different tools that touch the HDDs, and then you have people using the LVM technology in different ways (personal vs. enterprise) that change the situation. The good news is that at least with Linux you're not completely locked out from doing what you want.
  • MacGyver
    MacGyver about 11 years
    @slm .. so true :)
  • Bratchley
    Bratchley about 11 years
Regarding your update: "it's much more obvious to myself and others the intent of how this particular drive is being used vs. without the 8e." That's why you use fdisk -l AND pvs to get the aforementioned self-documentation; pvs will tell you more about it than just the fact that it's LVM, so you'll need to use both commands if you want to understand storage on the given machine, no matter what. BTW I'm not saying anyone is unknowledgeable or anything, I'm just trying to keep bad information from getting out.
  • slm
    slm about 11 years
    @Joel - As am I. I appreciate the discussion. It's good that we can all bring our various practical experiences and try and provide better guidance on this particular topic than what's currently available on the webs. At a minimum we're at least pulling a lot of fragmented documentation into a single location. 8-).
  • MacGyver
    MacGyver about 11 years
@slm that was my problem too ... sooo much information about LVM without the best practices. The best practices are crucial. I'm currently working with HP-UX, and there the standard is adding raw (SAN) disks into the VG. HP-UX is about to end and we're moving to Linux. That's why I had these questions :)
  • Gert van den Berg
    Gert van den Berg about 7 years
    @Bratchley: The RH recommendation seems to mainly be "why not to create more than one LVM partition on the same disk"
  • Bratchley
    Bratchley about 7 years
    @GertvandenBerg I don't think that's a "RH" opinion as much as someone who doesn't work in the enterprise but happens to work at Red Hat. It's infinitely easier to go into VMware and expand a disk out and do a pvresize than to reboot the whole machine. Like seriously, if it weren't for grub, I would never want to deal with partitions ever again.
  • Gert van den Berg
    Gert van den Berg about 7 years
@Bratchley: What I mean is that in the document, the following paragraphs deal with the disadvantages of multiple LVM partitions on a disk, not with using no partitions... (which doesn't seem to be discussed)
  • roaima
    roaima about 5 years
    Your answer seems to concentrate more on the merits of using VMs rather than full disk allocations to LVM.
  • G-Man Says 'Reinstate Monica'
    G-Man Says 'Reinstate Monica' about 5 years
    @roaima: OK, my eyes are failing.  Where does this answer say anything about VMs?
  • roaima
    roaima about 5 years
    @G-Man the entire thing is about adding storage to a VM, and then being able to slice up the newly allocated disk using LVM in the VM itself without needing a reboot.
  • karatedog
    karatedog over 2 years
A consequence of not partitioning the entire disk before adding it to LVM is that you cannot easily increase the size of the disk later to gain space in LVM. You can increase the disk size if it is a virtual HDD (say, in VMware), and the OS will even recognize the size increase. However, all the tools that expand space on a disk work on partitions, which you don't have. So now you have an unpartitioned disk, increased in size, that cannot be used by LVM, and if you want to tidy things up you have to add a new disk, move everything over to it, and carefully delete/remove the old disk.