Adding Disks With LVM

Solution 1

After reviewing a combination of guides and tutorials on the net, I was able to successfully add a disk to my Ubuntu Server 14.04 machine and set it up so that multiple hard drives appear as one single drive. To do this, I used LVM.

To help anyone else who might want to do this at some point, I will post here what I did.


These steps assume that you are starting from scratch, having already installed Ubuntu on your machine (via "Guided - use the entire disk and setup LVM") and physically added the additional disk. They may also work if you have existing data on the machine, but I can't say for sure that it would be safe.

These commands assume the following information, which will vary depending on your setup (a combined way to check each value is sketched just after this list):

  • Your new disk is 'sdb'
    • This can be found by running ls /dev/sd*
  • Your volume group name is 'ubuntu-vg'
    • This can be found by running vgdisplay
  • Your logical volume path is '/dev/ubuntu-vg/root'
    • This can be found by running lvdisplay
  • Your new disk is 20GB
    • Hopefully you know how big the disk is.
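
For reference, here is one way to check all of these values at once (a quick sketch; the values in the comments are examples and will differ on your system):

    # Disks and partitions (a brand-new disk will show no partitions yet)
    ls /dev/sd*
    lsblk

    # Volume group name, e.g. ubuntu-vg
    sudo vgdisplay | grep "VG Name"

    # Logical volume path, e.g. /dev/ubuntu-vg/root
    sudo lvdisplay | grep "LV Path"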

  1. Install Logical Volume Manager (you may or may not need to do this; see the note below this step).

    sudo apt-get install system-config-lvm
    
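     Note: system-config-lvm is a GUI frontend; the command-line tools used in the steps below (pvcreate, vgextend, lvextend, resize2fs) come from the lvm2 package, which a "Guided ... LVM" install normally already includes. A quick optional check:

    # These should all print paths if the LVM command-line tools are present
    command -v pvcreate vgextend lvextend

    # If they are missing, this installs just the command-line tools
    sudo apt-get install lvm2
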
  2. Convert your new disk to a physical volume (in this case, the new disk is 'sdb').

    sudo pvcreate /dev/sdb
    
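     As an optional sanity check, the new physical volume should now be listed, initially with no volume group assigned:

    sudo pvs
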
  3. Add the physical volume to the volume group via 'vgextend'.

    sudo vgextend ubuntu-vg /dev/sdb
    
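     Optionally, confirm that the volume group now shows the extra space as free (the VFree column):

    sudo vgs ubuntu-vg
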
  4. Extend the logical volume into the new free space (i.e. grow the volume by the size of your new disk).

    sudo lvextend -l +100%FREE /dev/ubuntu-vg/root
    
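     Optionally, confirm that the logical volume has grown by roughly the size of the new disk (the LSize column):

    sudo lvs
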
  5. Resize the file system on the logical volume so it uses the additional space (resize2fs covers the default ext4; other filesystems have their own resize tools).

    sudo resize2fs /dev/ubuntu-vg/root
    
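     As an aside, lvextend's -r (--resizefs) flag runs the filesystem resizer for you, so something like this should be equivalent to steps 4 and 5 combined:

    sudo lvextend -r -l +100%FREE /dev/ubuntu-vg/root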

That should do it. Five simple steps, and no reboot required! Run df -h and the new disk space should show as allocated correctly, and any webapps you may be running will pick up the new amount of space.

Solution 2

This technique worked for me with a 128GB SSD primary and a 2TB HDD extension.
If you run into an issue with the "ubuntu-vg" name when adding a physical volume to the volume group, try issuing the command

sudo vgdisplay 

Typically the volume group name has the format NAME_OF_COMPUTER-vg, so if your system is named SKYNET, your volume group would likely be named

SKYNET-vg
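
If you want to grab the name programmatically, lvm2's reporting commands can print just the volume group name (a small sketch using standard vgs options):

sudo vgs --noheadings -o vg_name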

Solution 3

I attempted to set up a large LVM disk on 14.04 64-bit Desktop with 3x500GB SATA drives. It failed during the installation with device errors. I found a link stating that drives over 256GB hit the extent limit, but I don't know if that applies here.

I also attempted to set up RAID (RAID 1 for /boot, 300MB; RAID 0 for swap, 2GB; and RAID 5 for /, everything else). More failures.

$ sudo apt-get install -y mdadm

From the Live CD's "Try Ubuntu without installing" option you can still install mdadm. Still no luck. GParted's detection seems flaky and doesn't pick up some LVM volumes or some RAID volumes (/dev/mdX) unless everything has already been given a filesystem:

$ sudo mkfs.ext4 /dev/md2

Also, the RAID configs present even more challenges now. mdadm no longer seems to be included in the installer's /target/usr/sbin package list, and installing it there so that the installed system boots at all would be a huge ordeal, for which I simply don't have the time or patience, only to find out after a few more hours of work that it still doesn't boot on these new Windows 8 performance-hacked (UEFI) motherboards because of a GRUB issue.
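
For anyone who does want to attempt that ordeal, the general shape of the fix from the live CD would be something like this (a rough sketch, assuming the RAID 5 root is /dev/md2 and the arrays assemble cleanly):

$ sudo mdadm --assemble --scan                 # assemble any detected arrays
$ sudo mount /dev/md2 /mnt                     # mount the root array
$ for fs in dev proc sys; do sudo mount --bind /$fs /mnt/$fs; done
$ sudo chroot /mnt apt-get install -y mdadm    # install mdadm inside the target
$ sudo chroot /mnt update-initramfs -u         # rebuild the initramfs so the arrays start at boot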

Installing LVM from Ubiquity works great, until you need to add more disks to the / (root) partition, at which point you stand a very good chance of blowing away the entire install. LVM resize operations keep failing and you end up back at square one.

Trying the 14.04 Server installer's partman saves the day.

I booted up the 14.04 Server installer; it identified the architectures just fine, installed mdadm, put GRUB on all 3 disks, and everything works great.

3 disks (500GB SATA), 3 partitions each. All partitions set to the Linux RAID type in fdisk.

RAID 1 for /boot (300MB partitions), RAID 0 for swap (2GB partitions), and RAID 5 for / (500GB partitions, i.e. whatever is left).
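
The Server installer's partman created these arrays automatically; for reference, the equivalent manual mdadm commands would look roughly like this (a sketch, not what was actually run here):

$ sudo mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1   # /boot
$ sudo mdadm --create /dev/md1 --level=0 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2   # swap
$ sudo mdadm --create /dev/md2 --level=5 --raid-devices=3 /dev/sda3 /dev/sdb3 /dev/sdc3   # /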

$ sudo fdisk -l
   Device Boot      Start        End     Blocks  Id  System
/dev/sda1            2048     616447     307200  83  Linux
/dev/sda2          616448    4810751    2097152  83  Linux
/dev/sda3         4810752  976773167  485981208  fd  Linux raid autodetect

   Device Boot      Start        End     Blocks  Id  System
/dev/sdc1   *        2048     616447     307200  83  Linux
/dev/sdc2          616448    4810751    2097152  83  Linux
/dev/sdc3         4810752  976773167  485981208  fd  Linux raid autodetect

   Device Boot      Start        End     Blocks  Id  System
/dev/sdb1            2048     616447     307200  83  Linux
/dev/sdb2          616448    4810751    2097152  83  Linux
/dev/sdb3         4810752  976773167  485981208  fd  Linux raid autodetect
...

$ sudo ls /dev/md*
/dev/md0 /dev/md1 /dev/md2

/dev/md:
0 1 2

$ sudo mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed Aug 6 13:03:01 2014
     Raid Level : raid1
     Array Size : 306880 (299.74 MiB 314.25 MB)
  Used Dev Size : 306880 (299.74 MiB 314.25 MB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Mon Aug 11 19:51:44 2014
          State : clean

 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

           Name : ubuntu:0
           UUID : 03a4f230:82f50f13:13d52929:73139517
         Events : 19

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1

$ sudo mdadm -D /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Wed Aug 6 13:03:31 2014
     Raid Level : raid0
     Array Size : 6289920 (6.00 GiB 6.44 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Wed Aug 6 13:03:31 2014
          State : clean

 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 512K

           Name : ubuntu:1
           UUID : 9843bdd3:7de01b63:73593716:aa2cb882
         Events : 0

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
       2       8       34        2      active sync   /dev/sdc2

$ sudo mdadm -D /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Wed Aug 6 13:03:50 2014
     Raid Level : raid5
     Array Size : 971699200 (926.68 GiB 995.02 GB)
  Used Dev Size : 485849600 (463.34 GiB 497.51 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Mon Aug 11 19:54:49 2014
          State : active

 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : ubuntu:2
           UUID : 6ead2827:3ef088c5:a4f9d550:8cd86a1a
         Events : 14815

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       8       19        1      active sync   /dev/sdb3
       3       8       35        2      active sync   /dev/sdc3

$ sudo cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# / was on /dev/md126 during installation
UUID=2af45208-3763-4cd2-b199-e925e316bab9 /     ext4  errors=remount-ro 0 1
# /boot was on /dev/md125 during installation
UUID=954e752b-30e2-4725-821a-e143ceaa6ae5 /boot ext4  defaults          0 2
# swap was on /dev/md127 during installation
UUID=fb81179a-6d2d-450d-8d19-3cb3bde4d28a none  swap  sw                0 0

Running like a thoroughbred now.

It occurs to me that this won't work for you if you are using 32-bit hardware, but at that point I think soft RAID might be a worse choice than single-disk LVM for anything smaller, and JBOD for anything older, anyway.

Thanks.


Comments

  • oink
    oink over 1 year

    I'm sure this has been answered somewhere on here before (I even found kind of a guide here, but it seemed a bit spotty and incomplete), but I was wondering if someone could assist me, or at least point me in the right direction, to accomplish what I'm trying to do.

    Basically I installed Ubuntu 14.04 (via "Guided - use the entire disk and setup LVM") on a 20GB disk. I then physically added a clean 80GB disk to the machine, which it detects as 'sdb'.

    My question is: I want to add/combine the allocated space from the new disk (80GB) with my existing drive, so that instead of showing two drives (20GB and 80GB), the machine simply shows one drive (100GB). I'm not worried about RAID or any other special add-ons.

    I'm somewhat new to Linux, but understand that I need to use LVM to accomplish this.

    If there is anyone who can help me out or link me to a helpful guide/tutorial, it would be very much appreciated! Not sure if this is needed either, but here is my 'fdisk -l' and '/etc/fstab' output:

    fdisk output (shortened):

    Disk /dev/sda: 21.5 GB, 21474836480 bytes
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1    *       2048      499711      248832   83  Linux
    /dev/sda2          501758    41940991    20719617    5  Extended
    /dev/sda5          501760    41940991    20719616   8e  Linux LVM
    
    Disk /dev/sdb: 85.9 GB, 85899345920 bytes
    Disk /dev/sdb doesn't contain a valid partition table
    
    Disk /dev/mapper/ubuntu--vg-root: 20.4 GB, 20392706048 bytes
    Disk /dev/mapper/ubuntu--vg-root doesn't contain a valid partition table
    
    Disk /dev/mapper/ubuntu--vg-swap_1: 801 MB, 801112064 bytes
    Disk /dev/mapper/ubuntu--vg-swap_1 doesn't contain a valid partition table
    

    /etc/fstab (shortened):

    # <file system> <mount point>   <type>  <options>       <dump>  <pass>
    /dev/mapper/ubuntu--vg-root /               ext4    errors=remount-ro 0       1
    # /boot was on /dev/sda1 during installation
    UUID=26710773-7a64-4f34-a34e-0057cb1739d7 /boot           ext2    defaults        0       2
    /dev/mapper/ubuntu--vg-swap_1 none            swap    sw              0       0
    
  • spyderdyne
    spyderdyne over 9 years
    Partman saves the day. Booted up the 14.04 Server installer, it identified the architectures just fine, installed MDADM, and everything works great. Here is a summary of the setup in case it's useful.
  • nathancahill
    nathancahill about 9 years
    You can use lvextend -l +100%FREE to extend to use all free space, instead of lvextend -L+20G
  • O. Jones
    O. Jones over 7 years
    You can use cat /proc/partitions; /sbin/rescan-scsi-bus; cat /proc/partitions to find the name, like sdb, of a newly installed drive.
  • Rod Smith
    Rod Smith almost 7 years
    This should work; however, I caution against using a whole disk as a logical volume. Instead, I recommend partitioning the disk and creating the LVM within one or more partitions on that disk. This provides flexibility should you want or need some non-LVM space in the future. It may also work better if you run into a tool that assumes all disks are partitioned. I know of no important examples of such tools, but you never know what assumptions might crop up in some random utility you may need to run in the future.
  • mtalexan
    mtalexan almost 7 years
    From Rod Smith's comment, that means mechanically you need to run fdisk on your /dev/sdb first, allocate all the space to a new partition, set the partition type to "Linux LVM" (type 8e), and then replace all the /dev/sdb entries in these instructions with /dev/sdb1.
  • erikbstack
    erikbstack over 6 years
    Great guide. Checking available disks is more beautifully done with lsblk, though. I was testing on RHEL7.
  • Stéphane
    Stéphane over 4 years
    Note that Ubuntu server seems to default to a logical volume path of /dev/ubuntu-vg/ubuntu-lv instead of /dev/ubuntu-vg/root. So check with sudo lvdisplay or ls /dev/ubuntu-vg/. For example, I had to use this for my step #4: sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv