How to install Windows and Linux on separate drives so that their booting is independent


Solution 1

I think UEFI can handle that automatically. At the least, it should find both EFI partitions, just as it finds an EFI partition on an inserted DVD or a USB stick.
You can configure the boot order in the UEFI setup manually, and on most machines you can press a key at startup to bring up a menu that lets you choose what to boot.
Yes, distros will let you do that at install time, but depending on what you install you might have more or less work to do. Antergos, for example, specifically asks for the /boot/efi partition and lets you create one if it doesn't exist. Just google it if you can't figure it out at install time; this is basic stuff that should be documented somewhere for every distro.
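If you later want to inspect or adjust the boot order from within Linux rather than the firmware setup screen, something like the following should work. This is a minimal sketch using efibootmgr; the entry numbers are hypothetical, so check your own listing first.

    # List the firmware's boot entries, including the partitions and loader
    # files they point to
    efibootmgr -v

    # Suppose 0000 is "Windows Boot Manager" and 0001 is the Linux entry;
    # put Linux first in the boot order (numbers here are just an example)
    sudo efibootmgr -o 0001,0000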

I don't know about BIOS/MBR, but I think it would be possible with that instead of UEFI as well.

Edit:
There shouldn't be any need to connect them one after the other. It should work fine with both connected from the beginning.

Solution 2

Results

It works.

Each OS -- Windows 7 & OpenSuse Tumbleweed -- boots and works, with

  • both hard drives online
  • the other hard drive offline

What I did

1) Disabled Hard Drive 2 in the UEFI setup and installed Windows 7 on Hard Drive 1. It shouldn't be necessary to disable other hard drives, but Windows likes to plant its flag on whatever it can find, so I thought I'd avoid that. HD1 is formatted as GPT.

2) Enabled Hard Drive 2, so both drives are online, and installed OpenSuse on Drive 2. HD2 is formatted as GPT. All partitions the Suse installer creates for Linux are on this drive, viz. EFI, swap, OS/root, and the user partitions. All partitions are mounted via UUID (important). No alteration is made to Drive 1. The boot loader chosen is Grub2 on EFI.

3) Tested each OS by booting with both drives enabled, and with the other drive disabled. Works fine. One small hitch: if the Windows drive is disabled, Tumbleweed takes a bit longer to init, because a startup job related to the swap partition times out. Even though the partition itself is mounted via UUID, some systemd job references it via the device path. With both drives enabled, the swap device is /dev/sdb2; with just the Linux drive enabled, it's /dev/sda2. It doesn't seem to affect operation after booting, other than prolonging init time, and it's not an issue in normal use since both drives are online. Will look into it; the sketch after this list shows one way to track such a reference down.
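As a rough sketch of how one might hunt down a leftover device-path reference to the swap partition: the device names below are just examples, and the grub2-mkconfig path is the usual openSUSE location, so adjust for your own layout.

    # Find the swap partition's UUID (device name is only an example)
    sudo blkid /dev/sda2

    # /etc/fstab should refer to swap by UUID rather than /dev/sdb2, e.g.:
    #   UUID=xxxxxxxx-xxxx-...  swap  swap  defaults  0 0
    grep swap /etc/fstab

    # The timeout often comes from a resume= kernel parameter that still
    # uses a device path; check the current cmdline and the GRUB defaults
    grep -H resume /proc/cmdline /etc/default/grub

    # If it does, change it to resume=UUID=... and regenerate the GRUB
    # config (path shown is the usual openSUSE one)
    sudo grub2-mkconfig -o /boot/grub2/grub.cfg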

Solution 3

What you're proposing was somewhat common in the days of BIOS-only booting, and worked reasonably well in that context. There is a complication in EFI-mode booting, though: Under EFI, boot loaders are stored in the EFI System Partition (ESP) using semi-arbitrary filenames. To tell the computer what boot loader to use, boot loader filenames (including identification of the partition(s) on which they reside) are stored in NVRAM. The complication is that many EFIs will automatically delete NVRAM entries that point to files that don't exist. Thus, once you remove a disk from the computer, the EFI may delete references to its boot loader(s), and when you plug that disk back in, it will no longer be bootable -- at least, not without some way to restore its NVRAM entry.

I'd like to emphasize that not all EFIs do this; some leave invalid NVRAM entries in place, which means that they'll continue to work after you remove and then restore a hard disk. I'm not sure of the percentage of computers that remove NVRAM entries; you'll just have to check this for yourself.

One possible way around this issue is to make use of the "fallback filename," which is EFI/BOOT/bootx64.efi (for x86-64/AMD64/x64 systems) on the ESP. The boot loader with this filename is launched if the firmware can't find any other valid boot loaders. Thus, you could copy or rename the OS's normal boot loader to this name to make it work; or you could put a boot manager in that place. (A boot manager lets you pick which OS to boot; a boot loader loads the OS kernel into memory. Some programs, like GRUB, do both things.) Something like my rEFInd boot manager might be helpful for this. In theory, putting rEFInd in the fallback position on both disks and clearing the NVRAM entries for Windows and Ubuntu should work fairly well, but there is one complication: Many EFIs treat the Windows boot loader (EFI/Microsoft/Boot/bootmgfw.efi) as if it were another fallback filename. It may be promoted over the regular fallback filename, so the system may boot to Windows if the Windows disk is installed.
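As a rough sketch of that idea, assuming the ESP is mounted at /boot/efi and rEFInd has already been installed under EFI/refind (both of which are assumptions about your particular layout):

    # Copy the boot manager's files to the fallback location the firmware
    # uses when it has no valid NVRAM entries
    sudo mkdir -p /boot/efi/EFI/BOOT
    sudo cp -r /boot/efi/EFI/refind/. /boot/efi/EFI/BOOT/

    # The binary itself must carry the fallback filename
    sudo mv /boot/efi/EFI/BOOT/refind_x64.efi /boot/efi/EFI/BOOT/bootx64.efi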

Note that, if the computer removes invalid NVRAM entries and so you rely on the fallback filename, booting may become unpredictable. That is, the computer might go to Windows one time and Linux another time, depending on what it had last booted, what disk(s) had been plugged in the last time it booted, etc. You should be able to use the computer's built-in boot manager to force a boot to a specific OS, but these tools are often awkward and are sometimes unreliable.
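If the firmware's own boot menu is awkward, one way to force a particular OS for the next boot from within Linux is efibootmgr's BootNext setting; a small sketch, with a hypothetical entry number:

    # Show the entries and their numbers
    efibootmgr

    # Boot entry 0002 on the next reboot only, without changing the
    # permanent boot order
    sudo efibootmgr -n 0002
    sudo reboot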

All of this makes the answer to the question of why you want to be able to remove disks important. Under EFI, leaving both your disks plugged in at all times is likely to be simpler than swapping them out, as you say you want to do. If you want to reduce the odds of one OS trashing the other's files, you might be better off with good backups and good planning of which partition(s) each OS is permitted to read and write.

Depending on your needs, an in-between option is to leave one disk permanently installed and place both OSes' boot loaders on that disk. You could then unplug the second disk on an as-needed basis. Be aware, though, that many distributions configure GRUB to rely on files in the Linux /boot directory, so if you want to make the Linux disk unpluggable, you may need to put a /boot partition on the permanently-installed disk. Alternatively, you could become an expert in GRUB so as to keep its configuration and support files on the ESP; or you could use something other than GRUB. As an extreme-case alternative, you could have a very small disk (even a USB flash drive) with an ESP and, if necessary, a /boot partition, and use separate disks for the bulk of each OS's installation.
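If you go that route, it's worth confirming which disk actually holds /boot and the ESP before unplugging anything. A quick check along these lines (the exact output layout will differ on your system):

    # Show which partitions /boot and the ESP are mounted from
    findmnt /boot
    findmnt /boot/efi

    # Or get the full picture of disks, partitions and mount points
    lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT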

Another option is to rely on the Compatibility Support Module (CSM), which provides support for BIOS-mode (aka "legacy") booting. You could install both Windows and Linux in BIOS mode and boot the computer much like you'd have done ten years ago. Controlling the CSM requires some expertise, though; it's easy to accidentally boot in EFI mode rather than BIOS mode (or vice-versa), and if you're unfamiliar with it, you might not even realize what you've done until you've fully installed the OS and it ends up not booting the way you expected. See this page of mine for more on this subject.
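A quick way to tell which mode a running Linux system (or an installer's shell) was booted in is to look for the firmware's EFI interface in sysfs; a small sketch:

    # If this directory exists, the system was booted in EFI mode;
    # otherwise it came up via BIOS/CSM ("legacy") boot
    if [ -d /sys/firmware/efi ]; then
        echo "Booted in UEFI mode"
    else
        echo "Booted in BIOS/CSM mode"
    fi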




Comments

  • Gyan
    Gyan over 1 year

    I have a system with UEFI firmware, to which I'll be adding two drives. I wish to install Windows 7 on one and a Linux distro on the other. I would like it set up such that if one drive were offline, I could reliably boot and operate the other OS, save for complaints about missing data partitions.

    My plan is to install Windows first with only one drive connected. Partition the drive as GPT and install. Windows will create an EFI partition and add its UEFI boot entry.

    Then connect the other drive -- so both are online -- and tell the linux installer to create its EFI partition on the 2nd drive and install its bootloader there. I'm deciding between OpenSuse Tumbleweed and one of the Arch-based distros. Will they allow me to do this at install time?

    So the UEFI boot entry for Windows points to Drive1\EFI, and the one for Linux points to Drive2\EFI. These entries should identify the partitions via UUID. I'll use the UEFI boot menu at startup to choose the OS.

    Is my plan viable? In Linux, will the drive's device address changing (sdb --> sda) when only one drive is present wreck things?

    Can this be carried out via BIOS/MBR modes? If this can't be done at all, why not?

    Thanks.

    P.S. I scanned most of the related questions displayed, but none seemed to have had the same requirements or circumstances. If there is one, with an answer, do let me know.

  • Gyan
    Gyan about 7 years
    I meant drive as shorthand for 'hard disk drive'. An SSD is a drive which has no disk. That nitpick aside, thanks. Do you know if Linux will init properly if the other drive is removed? Will the device address change, and will that be a problem?
  • Yorkziea
    Yorkziea about 7 years
    Yes, there is a difference between drive, driver, hard disk and device ;) You should get familiar with the UEFI boot sequence (exactly what happens, and in what order) - UEFI documents of varying complexity can be found on the internet. Look for "boot order" and "boot sequence".
  • Yorkziea
    Yorkziea about 7 years
    I am not sure if the disk (device) number is part of the boot entry. It should be the disk GUID, if I were writing the spec ;) I'd have to check what exactly is specified in the docs on the UEFI website.
  • Gyan
    Gyan about 7 years
    Thanks for the thought-provoking answer. I'll chew on it tomorrow. But my reason for the setup isn't portability. In the last month or two, I've had two computers lose their drives to the dreaded clicking sound. Both drives were old and both had most of their important data backed up. But those computers are deadweight till the replacement arrives.
  • Gyan
    Gyan about 7 years
    The 2nd time happened a couple of days ago. So I want independently bootable OS on each drive with core working data backed up in (near) realtime to the other drive. So when one goes, the comp's still functional and the important data is safe and usable. RAID is too expensive as much of the data doesn't need realtime redundancy and is regularly backed up externally. I'm not wedded to the scheme I described in the Q but will welcome any that achieves the same end.
  • Rod Smith
    Rod Smith about 7 years
    In that case, I recommend keeping both hard disks installed at all times. You can have separate ESPs, one on each disk, holding the boot loader for the associated OS. If one disk fails completely, the other OS will remain bootable, although you might need to use the computer's built-in boot manager to select the boot loader, particularly if you don't immediately remove the failed disk.
  • Gyan
    Gyan about 7 years
    Yes, that's the plan :) Is this possible using BIOS/CSM boot on MBR partitions?
  • Rod Smith
    Rod Smith about 7 years
    Yes, if both disks remain installed at all times and the OSes and their boot loaders are isolated to those disks, it will work in a very similar way for either BIOS-mode or EFI-mode booting. The differences crop up when you remove one disk; that may (depending on your EFI) trigger removal of boot loader entries for boot loaders stored on that disk under EFI, whereas with BIOS, the entry would be added back when you put the disk back. Knowing how to use efibootmgr (in Linux) or EasyUEFI (in Windows) can help you recover from these issues; a sketch follows below.
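    For reference, recreating a lost entry from Linux might look roughly like this; the disk, partition number, label and loader path are all assumptions to adapt to your own layout:

        # Recreate a boot entry for a loader stored on the ESP of /dev/sdb
        # (partition 1 here); adjust disk, partition, label and path as needed
        sudo efibootmgr -c -d /dev/sdb -p 1 \
             -L "openSUSE" -l '\EFI\opensuse\grubx64.efi'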