How to check on which drives grub2 has actually installed an MBR?


Solution 1

The MBR is 512 bytes, so a quick way to see if GRUB is there...

dd if=/dev/sda bs=512 count=1 | xxd

That dumps the MBR; I see "GRUB" in mine at byte offset 0x17F = 383.

dd if=/dev/sda bs=1 count=4 skip=383

When I do that, it prints 'GRUB' followed by the dd output.

You can wrap that in a bash for loop or something to cover more drives if you don't want to do it manually.
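
A minimal sketch of such a loop (assuming the disks show up as /dev/sd? and that you're root, since reading raw devices requires it):

for d in /dev/sd?; do
    if dd if="$d" bs=512 count=1 2>/dev/null | grep -aq GRUB; then
        echo "$d: GRUB signature found in the first sector"
    else
        echo "$d: no GRUB signature"
    fi
done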

Edit: Over ten years later, I just got a notification saying this was upvoted again. Great! However, I honestly don't know whether GRUB installs itself in the "protective MBR" of modern GPT-partitioned drives, so treat this answer as only potentially applicable to more modern UEFI booting.
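
If you want to know which situation you're in, here's a quick hedged check of the partition table type and of whether the machine booted via UEFI (the device name is an example; parted is assumed to be installed):

parted -s /dev/sda print | grep "Partition Table"    # prints "msdos" (classic MBR) or "gpt"
[ -d /sys/firmware/efi ] && echo "booted via UEFI" || echo "booted via legacy BIOS"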

Solution 2

There are several steps in the boot process (I'm describing a traditional PC BIOS):

  1. The BIOS reads the first sector (512 bytes) of the boot disk.
  2. The code in this first sector reads further data and code at a fixed location through the BIOS interface. This BIOS interface only exposes two hard disks: disk 0 is wherever the first sector was read from, and disk 1 is another disk which isn't easily predictable if you have more than two. The boot sector contains a byte that indicates which hard disk the further data is on; this is the disk containing /boot/grub (the sketch after this list shows one way to find that disk on a running system).
  3. The code loaded at the previous stage understands partitions, filesystems and other high-level notions. The data includes a filesystem location (i.e. a string like (hd0)/boot/grub) that determines where to find grub.cfg and further Grub modules.
  4. grub.cfg is executed, typically to show a menu and boot an OS.
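
On a GRUB 2 system you can ask GRUB's own tooling which block device holds /boot/grub, which is the disk that matters in step 2 (a sketch: the tool may be named grub2-probe on some distributions, and it reports the containing partition or RAID device, such as /dev/sda1 or /dev/md0, rather than the whole disk):

grub-probe --target=device /boot/grub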

The boot sector is generated by grub-setup, normally invoked through grub-install. The boot sector ends up on whatever disk you specified (in Linux syntax) on the grub-install or grub-setup command line. You can check that you have a boot sector on a disk by running file -s /dev/sda. Since you're adding a new disk and want to boot from it, you need to run grub-install on the new disk. Running grub-install multiple times on the same disk is harmless.
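
Concretely, something along these lines (device names are purely illustrative; sda stands for an existing disk, sdb for the new one):

file -s /dev/sda      # the output should mention a boot loader if one is installed;
                      # the exact wording depends on your version of file
grub-install /dev/sdb # write a boot sector to the newly added disk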

The difficult part is in step 2 above. If at all possible, put Grub (i.e. the /boot/grub directory) on the BIOS boot disk (or, approaching this from the other direction, tell your BIOS to boot from the disk where /boot/grub is). This is where device.map comes into play. Make sure that (hd0) is mapped to the disk that contains /boot/grub, then run grub-install on that disk.
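
For example, with a device.map along these lines (device names are illustrative; adjust to your system) and /boot/grub living on /dev/sda:

# /boot/grub/device.map
(hd0)   /dev/sda
(hd1)   /dev/sdb

grub-install /dev/sda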

If your two disks are in a software RAID-1 configuration, you'll have identical boot sectors. This is the desirable behavior: if the disk the BIOS boots from fails, booting from the other one will just work (since they contain the same bytes at the same relevant locations). If you've only mirrored certain partitions, then installing a boot sector only affects one of the disks. You should run grub-install again on the second disk, after changing device.map to associate (hd0) with the disk containing the second mirrored copy of /boot/grub.
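
In that case, an illustrative second pass would be to edit /boot/grub/device.map so that (hd0) points at the other disk, e.g.

(hd0)   /dev/sdb

and then run:

grub-install /dev/sdb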

Step 3 is pretty complex, but usually works out of the box. At step 4, Grub locates filesystems by UUID or looks for named files, so you no longer need to worry about the various ways to designate disks.
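
As a rough illustration of the UUID-based lookup, a generated grub.cfg menu entry typically contains lines of this shape (the bracketed values are placeholders, not real UUIDs or kernel versions):

search --no-floppy --fs-uuid --set=root <uuid-of-the-boot-filesystem>
linux /vmlinuz-<version> root=UUID=<uuid-of-the-root-filesystem> ro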


Comments

  • timday over 1 year

    I'm on a Debian/Squeeze system (with a history going back to at least Woody) which was upgraded to grub2 as part of the Squeeze upgrade. All works well, but I'm about to mess with the disk configuration.

    Currently the machine runs off 2 80GB drives with RAID1-ed /, /home and /boot partitions (there's another pair of drives with a RAID1-ed "/data" and a couple of swaps, in case anyone was wondering where the swap is, but I'm not touching those).

    I've added 2 130GB SSDs, partitioned them to be at least as large as the partitions on the 80GB drives, and intend to switch over to the new SSD drives by growing the RAID1s to include them, waiting for sync, then removing the old drives from the arrays so just the SSDs are left (and then growing the filesystems). But mdadm/ext3 wrangling is not what this question is about...

    That'll leave me with 2 obsolete 80GB (IDE) drives which I want to remove from the machine. My worry is that removing them will take some crucial MBR with them. How do I ensure the machine remains bootable?

    More specifically:

    • When I did the Squeeze upgrade, I remember there was some choice presented about which drives grub2 should install to (I went with the default, which was all drives). The SSDs weren't in the machine at the time, though; how can I rerun this to get grub to install on the SSD MBRs? (I'm guessing it's a dpkg-reconfigure of some package.)

    • How can I find out which drives grub2 thinks it's installed on? Good grief, there are almost 200 files under /boot/grub/ these days! Where to look? Also, it seems slightly odd that /boot/grub/device.map.auto only lists 3 drives currently (2 of the 80GBs but only one of the other drive pair, and none of the SSDs). How do I get that up to date? (Update: that was a red herring; device.map.auto seems to be a relic from years ago, and device.map looked sensible after an update by grub-mkdevicemap. I think my paranoia in this area originates from an old mobo's BIOS which would reorder the device order seen by GRUB on a whim.)

    Outcome: all went well and I now have the two old 80GB IDE drives out of the box, and a snappy, quick-booting system running off RAID1-ed SSDs with all filesystems resized up to their new partition sizes. The other "missing piece of the Grub puzzle" I was looking for was dpkg-reconfigure grub-pc, which prompts for which disks to maintain an MBR on. Aaron's answer actually did most to reassure me that this was working as expected, hence accepting that answer.
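
    For reference, the Debian-specific commands touched on above, which review and change the set of disks grub-pc keeps an MBR on (this assumes a Debian-style system; debconf-show ships with debconf):

    dpkg-reconfigure grub-pc                      # re-prompts for the install devices
    debconf-show grub-pc | grep install_devices   # shows what is currently recorded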