Convert Hyper-V Linux machine to Gen 2


Since this was the top result when I tried to find an answer, I'm posting my own solution, even though it's been 6+ years since the question was asked. (By the way, fercasas.com from the old comments is no longer available, so I can't say whether it was helpful or not.)

Intro & notes

Anyway, the real issue is BIOS vs. UEFI; the IDE vs. SCSI difference didn't throw anything extra my way.

So, in short: the answer is the same as converting a BIOS PC to a UEFI PC.

A bit more detail below.

Note #1: I did this with Ubuntu 20.04, Ubuntu 16.04 and CentOS 7. I still need to repeat it with Ubuntu 18.04; if that differs in important ways I'll edit the answer.

Note #2: As always, your partitioning scheme and disk names will probably differ from my examples. PLEASE modify accordingly; I am in no way responsible for data loss if you format the wrong partition! You can check partitions while in your original OS using fdisk -l and/or cat /etc/fstab.
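
For example, here's roughly how to do that check (just a sketch; run it in the original OS before you start, and write down which partition is mounted as /boot):

    # list block devices with filesystem type, UUID and mountpoint
    lsblk -f
    # list partition tables
    fdisk -l
    # see which devices/UUIDs are mounted where
    cat /etc/fstab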

Note #3: I recommend backing up the whole VM before this procedure, for example using the "Export" function in Hyper-V Manager.

Ubuntu 20.04

The following instructions are based on my Ubuntu 20.04 VMs (installed fresh for this purpose, with default install settings):

# boot your Gen 1 VM
# note: all steps are done as root, so sudo su first
    sudo su

# install the EFI version of grub for later use, just in case; note: installing it removes the other (BIOS) version
    apt-get install -y grub-efi
# backup current boot files
# make a /boot2 folder and copy everything from /boot to /boot2 (for backup, safekeeping, later use)
    mkdir -p /boot2
    cp -r /boot/* /boot2

# delete the old VM in Hyper-V Manager, but keep the VHDX file(s) (also, remember to export/backup before trying any of this)
# create a new Gen 2 VM with the same settings, and attach the existing VHDX file(s)
# add a DVD drive, make it the first boot device, and attach the same ISO image you used to install this OS
# boot a LiveCD, or the server installer + shell, or similar; what matters is booting in Gen 2/EFI mode
# if you boot into the server installer, pick Help, then Enter shell

# Prepare partitions and mounts
# format old boot partition to FAT
    mkfs -t vfat /dev/sda2
# create mountpoints
    mkdir -p /mnt/boot
    mkdir -p /mnt/root
# mount them
    mount /dev/sda2 /mnt/boot/
    mount /dev/ubuntu-vg/ubuntu-lv /mnt/root/
#       OR
    mount /dev/mapper/ubuntu--vg-ubuntu--lv /mnt/root/

# copy files from the old (BIOS) /boot2 backup, the one you made before formatting
# copy backup files back to /boot
    cp -r /mnt/root/boot2/* /mnt/boot/
# install EFI grub
    apt-get install -y grub-efi
    grub-install --force --target=x86_64-efi --boot-directory=/mnt/boot --efi-directory=/mnt/boot /dev/sda
# should reply:     - No error reported
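# optional sanity check (my assumption of what a good result looks like): the partition
# should now contain an EFI folder with a grub binary, something like EFI/ubuntu/grubx64.efi
    ls -R /mnt/boot/EFI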
# edit fstab
    nano /mnt/root/etc/fstab
# change the UUIDs to match what the comments above them say, like "was on .... during curtin installation", i.e. use that "....". For example:
# also, /boot needs to be changed from ext2/ext4/whatever to "vfat", so like this:
    /dev/ubuntu-vg/ubuntu-lv / ext4 defaults 0 0
    /dev/sda2 /boot vfat umask=0077 0 0
# keep other entries (e.g. swap) as they are, or if you know what you're doing, change them in a similar way, find the new UUIDs, etc.
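# optional: if you prefer UUID= entries in fstab, note that reformatting to vfat changed
# the boot partition's UUID; blkid shows the new one (a sketch, the device name is from my layout)
    blkid /dev/sda2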

# now we can shut down VM, unmount DVD/ISO, and get it ready for normal boot
    poweroff
# eject media + enter
# turn off VM

# start again by doing: Connect, Start

# after a successful reboot, reinstall and update grub, to get a correct and fresh setup with the current mounts etc.
# if you had to manually fix the boot, first fix whatever was wrong (like fstab), then do this
    grub-install /dev/sda --efi-directory=/boot
    update-grub
    reboot

That's mostly it for Ubuntu 20.04. I tried installing a new kernel after this procedure, and after update-grub and a reboot I could see the new entries, so future updates should be OK as well.
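
If you want to sanity-check that right away, something like this should do it (a sketch; linux-image-generic is the standard metapackage on a default install, adjust if you use a different kernel flavour):

    # reinstalling the kernel metapackage exercises the kernel/grub update path
    apt-get install --reinstall -y linux-image-generic
    update-grub
    # the regenerated config should contain a menuentry per installed kernel
    grep menuentry /boot/grub/grub.cfg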

In case of trouble, do not despair! If your first attempts go wrong and you end up at the grub> prompt, you can fix it by doing something roughly like this:

# if you reboot into the grub rescue prompt, you can still fix everything; no need to go back to the original export/backup files!
# enter these commands
# try this first
    configfile (hd0,gpt2)/grub/grub.cfg
# if the above didn't start the boot, try it manually like this; instead of "-x.x.x-xx-generic" enter your current kernel version (use Tab to autocomplete)
    set root=(hd0,gpt2)
    insmod linux
    linux /vmlinuz-x.x.x-xx-generic root=/dev/mapper/ubuntu--vg-ubuntu--lv
    initrd /initrd.img-x.x.x-xx-generic
    boot

After this you should end up in your normal OS instance; just sudo to root again, and repeat the grub-install and update-grub steps as explained above.

You can also force creation of the grub.cfg file on Ubuntu with the following command:

grub-mkconfig -o /boot/grub/grub.cfg

Ubuntu 16.04

This one is for Ubuntu 16.04. It is similar, practically the same, but I modified it to fit the workflow of the "Rescue" mode available from the stock 16.04 boot ISO image. "Rescue" mode puts you in a chroot, so I adjusted the paths and removed a few steps that aren't needed.

# boot your Gen 1 VM
# note: all steps are done as root, so sudo su first
    sudo su

# install the EFI version of grub for later use, just in case; note: installing it removes the other (BIOS) version
    apt-get install -y grub-efi
# backup current boot files
# make a /boot2 folder and copy everything from /boot to /boot2 (for backup, safekeeping, later use)
    mkdir -p /boot2
    cp -r /boot/* /boot2

# delete the old VM in Hyper-V Manager, but keep the VHDX file(s) (also, remember to export/backup before trying any of this)
# create a new Gen 2 VM with the same settings, and attach the existing VHDX file(s)
# add a DVD drive, make it the first boot device, and attach the same ISO image you used to install this OS
# boot the ISO in the Gen 2/EFI VM, pick the "Rescue" option in grub, then follow the prompts
# when it asks, pick the correct root filesystem (something like /dev/ubuntu-vg/root if you have LVM)
# it will ask whether to mount /boot; skip that
# once you are in the shell (you'll see a "#" at the bottom of the screen), start bash to make your workflow easier
    bash

# Prepare partitions and mounts
# if you mounted /boot, just unmount it again
    umount /boot
# check which partition was your boot partition
    fdisk -l
    cat /etc/fstab
# format the old boot partition to FAT
    mkfs -t vfat /dev/sda1
# mount boot again
    mount /dev/sda1 /boot/

# copy files from the old (BIOS) /boot2 backup, the one you made before formatting
# copy backup files back to /boot
    cp -r /boot2/* /boot/
# install EFI grub if you didn't do so earlier; otherwise it should already be available, since you are effectively in a chroot of your original OS installation; you may need to set up /etc/resolv.conf temporarily
#    apt-get install -y grub-efi
    grub-install --force --target=x86_64-efi --boot-directory=/boot --efi-directory=/boot /dev/sda
# should reply:     - No error reported
# edit fstab
    nano /etc/fstab
# change the UUIDs to match what the comments above them say, like "was on .... during installation", i.e. use that "....". For example:
# also, /boot needs to be changed from ext2/ext4/whatever to "vfat", with umask added, like this:
    /dev/mapper/ubuntu--vg-root / ext4 errors=remount-ro 0 1
    /dev/sda1 /boot vfat umask=0077 0 2
# keep other entries (eg swap) as is, or if you know what you're doing change them in similar way, or find new UUIDs, etc.

# now we can shut down VM, unmount DVD/ISO, and get it ready for normal boot
# exit bash, then the rescue shell, then power off
    exit
    exit
    poweroff
# eject media + enter
# turn off / shut down VM

# start again by doing: Connect, Start

# you may get "press any key to continue" .. just press any key

# after a successful reboot, reinstall and update grub, to get a correct and fresh setup with the current mounts etc.
# if you had to manually fix the boot, first fix whatever was wrong (like fstab), then do this
    sudo su
    grub-install /dev/sda --efi-directory=/boot
    update-grub
    reboot

On a future kernel update you may be prompted by the grub installer whether you want to keep your config, along with a few other options. Pick the first one: installing the "package maintainer's" version (assuming you did not make manual changes to grub.cfg, which would be unusual in a server VM). Since Ubuntu is the package maintainer, it should set all the "safe defaults" properly for you.
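
If you script your updates and want that prompt answered automatically, dpkg can be told to take the maintainer's version without asking (a sketch using standard dpkg options, nothing specific to this procedure):

    # --force-confdef keeps defaults where possible; --force-confnew takes the maintainer's new conffile
    apt-get -y -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confnew" dist-upgrade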

If you get stuck, check the Ubuntu 20.04 procedure and tips at its end.

CentOS 7

Next come the CentOS 7 steps. The procedure is quite similar, so please read the Ubuntu 20.04 instructions first to understand the general idea and warnings.

# as with Ubuntu, create a Gen 1 VM with default settings, or use one you already have; just make sure to backup/export the VM to prevent any data loss

# boot your Gen 1 VM
# again we either do this as root or sudo into root
    sudo su

# while still in Gen1, install the following in your OS
    yum install -y grub2-efi
    yum install -y grub2-efi-modules
    yum install -y efibootmgr
# the following are optional, but won't hurt, and can be handy later
    yum install -y shim
    yum install -y dosfstools
    yum install -y nano

# backup content of /boot to temporary /boot2 folder
    mkdir -p /boot2
    cp -r /boot/* /boot2/
    ll /boot2/
# then shut down and export/backup your VM if you haven't already
    poweroff

# after the export/backup, again remove your Gen 1 VM, but keep the VHDX file(s)
# then recreate it as a new Gen 2 VM, with a DVD drive and the installer ISO attached, and boot from the ISO

# once booted, pick "Troubleshooting", then "Rescue a CentOS system"
# the wizard/guide will ask you a few things; pick "1" to "Continue"
# this will mount your current OS
# once you are in the shell, start bash to make your workflow easier
    bash

# Prepare partitions and mounts
# check for your old boot partition
    fdisk -l /dev/sda*
# format the old boot partition; make sure you change sdXn to match your real situation!
    umount /dev/sda1
    mkfs -t vfat /dev/sda1
# chroot into your real OS instance; I didn't have to do this on Ubuntu, but couldn't find a way around it here,
# as yum was non-operational in the CentOS rescue mode (in my testing)
    chroot /mnt/sysimage
# mount the boot partition
    mount /dev/sda1 /boot
# no need to mount root as it was already mounted at /mnt/sysimage/ by Rescue guide/wizard

# copy files from the old (BIOS) /boot2 backup, the one you made before formatting, to /boot
    cp -r /boot2/* /boot/
# install EFI grub; if you get errors here, you didn't install the packages with yum as instructed earlier
    grub2-install --force --target=x86_64-efi --boot-directory=/boot --efi-directory=/boot /dev/sda
        # should output: No error reported
# you HAVE TO regenerate the grub config after the reinstall, or it won't pick up everything
    grub2-mkconfig -o /etc/grub2.cfg
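# optional sanity check: the regenerated config should list a menuentry per installed kernel
    grep ^menuentry /etc/grub2.cfg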
# edit fstab
    vi /etc/fstab
        # OR, if you installed the optional packages
    nano /etc/fstab
# the updated fstab content should be something like this
    /dev/mapper/centos-root / xfs defaults 0 0
    /dev/sda1 /boot vfat umask=0077 0 0
#   /swap... keep as is ; same with other partitions if you have them
# instructions for vi:
#    i to enter edit mode; Esc to leave it; :x to save and quit; :q! to quit without saving
# exit chroot environment
    exit
# then shut down VM
    poweroff
# eject media
# turn off, then you can start it again
#    Connect, Start

# while booting, it should show the grub menu; let it boot. It will then (re)boot again; let it do that too, until you get to the normal login prompt. This is expected, because SELinux does some conversion (relabeling) on first boot and seems to need a reboot after that!
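# if the relabel doesn't happen on its own and you see SELinux denials instead, you can
# force a full relabel on the next boot (the standard RHEL/CentOS trigger file):
    touch /.autorelabel
    reboot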

# after the reboot, reinstall and update grub, to get a correct and fresh setup with the current mounts etc.
# if you had to manually fix the boot, first fix whatever was wrong (like fstab), then do this
    grub2-install /dev/sda --efi-directory=/boot
    grub2-mkconfig -o /etc/grub2.cfg
    reboot

This should give you a working Gen 2 / EFI CentOS 7 VM. As with Ubuntu, I did a kernel update after this procedure, and while I did have to run the grub2-mkconfig command manually, everything else worked fine; on reboot I could pick the new entry in grub, and it booted correctly into the new kernel.
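
Once the VM is booted in EFI mode, you can also sanity-check the firmware boot entries with efibootmgr, which we installed earlier (a sketch; the exact entry names will vary):

    # list UEFI boot entries verbosely; there should be one pointing at the grub EFI binary
    efibootmgr -v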

Again, if something goes wrong on your first try, don't worry; you can fix it by doing something like this:

    # if you reboot into the grub rescue prompt, you can still fix everything; no need to go back to the original export/backup files!
    # enter these commands
    # try this first
        configfile (hd0,msdos1)/grub2/grub.cfg

    # if the above didn't start a proper boot, try it manually like this; just enter your current kernel version (use Tab to autocomplete)
        set root=(hd0,msdos1)
        linux /vmlinuz-(hit Tab)
        linux /vmlinuz-(your.version) ro root=/dev/mapper/centos-root
        initrd /initramfs-(your.version).img
        boot
    # if needed, repeat this up to 2 times until you get to the normal login prompt, because SELinux does some conversion on first boot!

After this you should end up in your normal OS instance; just sudo to root again, and repeat the grub2-install and grub2-mkconfig steps as explained above.

Wrap-up

I assume some parts could be done better or differently, as I've found a lot of variations on this topic, as the Ubuntu vs. CentOS differences already show.

If anyone stumbles upon an even better way, let me know in the comments below. I have a whole environment of production servers, assorted Ubuntu/CentOS/Debian VMs, to convert from Gen 1 to Gen 2 in 2021, so any additions are welcome!

A few links I used in my research:

https://unix.stackexchange.com/questions/418401/grub-error-you-need-to-load-kernel-first

https://help.ubuntu.com/community/Installation/UEFI-and-BIOS/stable-alternative#Create_boot-loading_systems_for_external_drives

How do I convert my linux disk from MBR to GPT with UEFI?

https://askubuntu.com/questions/831216/how-can-i-reinstall-grub-to-the-efi-partition/1203713#1203713

https://unix.stackexchange.com/questions/152222/equivalent-of-update-grub-for-rhel-fedora-centos-systems


Comments

  • Jcl (over 1 year ago)

    Is there any way I can easily (and without reinstalling) convert a Linux Hyper-V (gen 1) VM to a gen2 one?

    I know the Convert-VMGeneration cmdlet for PowerShell (this one: https://code.msdn.microsoft.com/windowsdesktop/Convert-VMGeneration-81ddafa2) but that won't work with Linux VMs.

    I'm having some problems running on Hyper-V (the machine stops responding for a while, etc.) that, for the most part, I've seen are greatly improved on Gen 2 (we follow all Microsoft recommended practices for running Linux on Hyper-V, but it's still not there, at least on Gen 1).

    The original VM was running on a Windows Server 2008 host. We have upgraded to a 2012 R2 host and can run Gen2 now, but every source I've found says you have to reinstall linux for it (I haven't been able to figure out why, but I'm sure there should be a reason).

    The installation and migration for this particular server (it's a GitLab server running on Ubuntu 14.04) is pretty cumbersome, and we'd prefer not to reinstall and migrate if at all possible.

    • Michael Hampton (over 9 years ago)
      I don't see any reason why this wouldn't work, provided the Linux guest is up to date and secure boot is disabled for the VM. Only a few Linux distributions support secure boot so far.
    • Jcl (over 9 years ago)
      It just won't boot if I attach the Gen 1 VHDX disks to a new Gen 2 machine. I don't really know why or where to look. That happens with Windows VMs too, unless you run some conversion script (which admittedly I haven't researched further) on all of the VHDX files. Those conversion scripts fail on Linux hard disks (they look for the partitions and do "something" on Windows VMs; they don't seem to find any when using ext partitions)
    • Jcl (over 9 years ago)
      From the FAQ (technet.microsoft.com/en-us/library/dn282285.aspx): Can a VHDX file that was converted from a VHD file be used to boot a generation 2 virtual machine? No. A clean installation of the operating system is required.
    • Michael Hampton (over 9 years ago)
      I think this covers it. serverfault.com/q/629245/126632 It's not trivial to convert a drive formatted for MBR to GPT/UEFI boot, and many Linux distributions need to be installed fresh in order to boot from UEFI.
    • Jcl (over 9 years ago)
      @MichaelHampton yeah, was about to post that precise link... but since there IS a tool for Windows VMs I was wondering if such a tool for Linux VMs would exist too (I haven't found it, that's for sure)
    • Jcl (over 9 years ago)
      Ah, well, I'll get one day off my actual duty to try to reinstall and migrate from the old machine, and see if Gen2 actually works better with Linux :-) Thanks
    • Admin (over 8 years ago)
      It's probably too late, but I wrote a procedure to do exactly what you need to do, without reinstalling the VM and without any data loss. Check it out here: fercasas.com/2016/01/04/…
    • Jcl (over 8 years ago)
      @fc7 yeah, not using my own gitlab anymore (I'm using their online service), but it might be handy. Make it an answer and I'll accept it
    • LuxZg (over 3 years ago)
      Thanks for accepting my answer. As a side note, the answer now contains procedures for Ubuntu 20.04, Ubuntu 16.04 and CentOS 7. With slight modifications it should fit any distro using the grub bootloader, which would be pretty much all of them these days. Cheers!
  • Michael Hampton (over 3 years ago)
    This generally should work, but it puts Linux kernels and such in the (newly created) EFI system partition, which has security implications. Access to /boot needs to be carefully controlled and allowed to root only.
  • LuxZg (over 3 years ago)
    @Michael - I did my best to leave the system as close as possible to its state with the original BIOS install. I literally installed fresh Ubuntu 20.04 with default settings, and it created sda1/2/3, with sda2 formatted as ext4, mounted as /boot, and containing vmlinuz, initrd.img, related symlinks, and grub. After conversion to EFI, the files are still in the same place, same mount, just reformatted to FAT (vfat), plus a new EFI folder with the EFI bootloader files, linked to the same vmlinuz/initrd and the same grub location (folder with config and modules). The rough total change is ext to vfat, and a couple of new files.
  • Michael Hampton (over 3 years ago)
    And the permissions?
  • LuxZg (over 3 years ago)
    I wasn't paying attention (it was 1 AM, after all), and yes, since EFI is now on a FAT partition it will inherently ignore permissions. I guess mounting /boot as read-only would be one way, but the admin would need to remember to remount it read-write on apt/yum updates (if the kernel or grub is being updated). Or use masks. In my case these are servers, and only admins have access, so root or no root didn't hit me until your comment (anyone with access has sudo rights anyway). But you are correct. Seems I'll need to install a fresh Gen 2/EFI OS to see how a default EFI install handles this problem.
  • LuxZg (over 3 years ago)
    Btw, added CentOS 7 instructions as well. And at least the answer to the question isn't "can't be done!" anymore :) And it's relatively quick, as this can be done on large VMs without touching the root and data partitions; no file copies are really needed (the backup and export is just a recommendation), so it's pretty much a quick in-place upgrade. It takes a few minutes (per VM) and opens up a whole world of possibilities, so I'll take it. If I find a good answer to the permissions issue I'll update the answer, thanks!
  • Michael Hampton (over 3 years ago)
    Pretty much like this: UUID=66F3-D766 /boot/efi vfat umask=0077 0 2 which results in drwx------.
  • LuxZg (over 3 years ago)
    Thanks! I'll still re-check tomorrow, do a fresh EFI install of both CentOS 7 and Ubuntu 20.04, then check their original fstab, and edit the answer as per your recommendation after that. Then I still need to go through 16.04 at a minimum and see if anything changed from version to version before I start doing this on my own production VMs...
  • LuxZg (over 3 years ago)
    Both CentOS 7 and Ubuntu 16.04 use umask=0077, so I have edited the fstab part of the answer with that, as Michael suggested. Thank you! Likewise, I see that both use 2 partitions: one for /boot with kernel/initrd/..., formatted with ext2, and one for /boot/efi that's vfat-formatted and contains EFI and the bootloader and a couple of grub-related files. In theory, one could remove the old /boot partition completely and make 2 partitions in its place, ext2 & vfat, but I will not spend my time on replicating the original setup, as I don't see much point. If someone needs a 100% identical setup, just modify my steps.
  • LuxZg (over 3 years ago)
    Ubuntu 16.04 added.
  • ionescu77 (over 2 years ago)
    Kudos, thanks for this; tested with CentOS 7.9. Good to mention changing the shell to bash for CentOS 7 as well (I noticed it in the Ubuntu README), otherwise mkfs is not found. I have a Gen 2 Hyper-V VM now. I was wondering if the new Gen 2 drivers get automatically picked up.