Kernel Panic - not syncing: VFS: Unable to mount root fs after new kernel compile
Solution 1
You forgot to build the initrd that goes with the new kernel. Run update-initramfs -c -k kernelversion, and then run update-grub so GRUB finds the new initrd and adds it to the boot menu.
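A sketch of those two steps, assuming a Debian/Ubuntu system with initramfs-tools and GRUB 2. The commands are printed rather than executed so you can review them first; uname -r is used here only as an example version source, and you should substitute the version string of the kernel you actually built (e.g. 3.2.4):

```shell
# Sketch, assuming Debian/Ubuntu with initramfs-tools and GRUB 2.
# The version string must match the kernel you just installed; here the
# running kernel's version from `uname -r` is used purely as an example.
KVER="$(uname -r)"

# -c creates a new initramfs; -k selects the kernel version it is built for.
echo "sudo update-initramfs -c -k ${KVER}"
# update-grub rescans /boot and regenerates grub.cfg, so the new
# kernel + initrd pair shows up in the boot menu.
echo "sudo update-grub"
```

Afterwards you should see both /boot/vmlinuz-&lt;version&gt; and /boot/initrd.img-&lt;version&gt;, and a matching menu entry in /boot/grub/grub.cfg.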
Solution 2
Did you build in all the drivers required to mount the root partition: the I/O controller driver, the filesystem driver, and so on?
The error means exactly what it says: the kernel is unable to mount the root filesystem.
I don't recall exactly what unknown-block(0,0) indicates, but my guess is that the I/O controller driver is missing.
Please note that these drivers have to be built into the kernel; modules won't work, because the root filesystem has to be mounted before any modules can be loaded from it.
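A quick way to check is to grep the kernel's .config: =y means built into the kernel image, =m means loadable module. The snippet below is a self-contained demo on a sample fragment written to /tmp (the option names are examples; CONFIG_FUSION_SPI is the LSI Logic SCSI driver that VMware typically emulates). On a real system you would run the grep against the .config in your kernel source tree:

```shell
# Demo on a sample .config fragment written to /tmp; on your machine,
# grep the .config in your kernel source tree instead.
cat > /tmp/sample.config <<'EOF'
CONFIG_EXT4_FS=y
CONFIG_FUSION_SPI=m
EOF

# "=y" means built into the kernel image; "=m" means a loadable module.
# A module cannot mount the root filesystem without an initrd, since the
# module files live on that filesystem. In this sample, CONFIG_FUSION_SPI=m
# would produce exactly this panic.
grep -E '^(CONFIG_EXT4_FS|CONFIG_FUSION_SPI)=' /tmp/sample.config
```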
fromClouds
Updated on September 18, 2022

Comments
-
fromClouds over 1 year
So I've been at this for a while and have been poking around for an answer for a few days, and figure it's about time to ask for help. I am running Ubuntu 10.10 in VMWare Fusion, and have downloaded a copy of the 3.2 kernel and built it with all default settings. When I try to boot into the new kernel after a call to make install, I get the following message:
[ 1.581916] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
[ 1.582260] Pid: 1, comm: swapper/0 Not tainted 3.2.4 #1
[ 1.582444] Call Trace:
[ 1.582552] [<ffffffff815e7447>] panic+0x91/0x1a7
[ 1.582666] [<ffffffff815e75c5>] ? printk+0x68/0x6b
[ 1.582799] [<ffffffff81ad2152>] mount_block_root+0x1ea/0x29e
[ 1.582929] [<ffffffff81ad225c>] mount_root+0x56/0x5a
[ 1.583047] [<ffffffff81ad23d0>] prepare_namespace+0x170/0x1a9
[ 1.583178] [<ffffffff81ad16f7>] kernel_init+0x144/0x153
[ 1.583304] [<ffffffff815f45f4>] kernel_thread_helper+0x4/0x10
[ 1.583436] [<ffffffff81ad15b3>] ? parse_early_options+0x20/0x20
[ 1.583570] [<ffffffff815f45f0>] ? gs_change+0x13/0x13
This used to appear on every reboot. I found that if I changed the VM's hard drive type, I could at least get GRUB to boot, but the message above still comes up when I try to load the newly compiled kernel. The old kernel works as before. I have checked that I compiled in support for ext4, which is the filesystem my root partition uses. I have also tried generating an initrd with a call to "sudo update-initramfs -c -k 3.2.4", but to no avail.
The compilation, I think, was pretty standard:
make menuconfig
make
make modules_install
make install
update-grub
reboot
Those were the general steps. In terms of options, I basically took the default on everything. In case it's pertinent, my fstab looks like this:
proc       /proc           proc  nodev,noexec,nosuid      0 0
#UUID=c75eddd9-f4fa-49be-927b-8c2da7074135 / ext4 errors=remount-ro 0 1
/dev/sda1  /               ext4  defaults                 0 1
#UUID=5bc6915e-fdfa-479a-885f-ea03cb14f9cd none swap sw 0 0
/dev/sda5  none            swap  sw                       0 0
/dev/fd0   /media/floppy0  auto  rw,user,noauto,exec,utf8 0 0
I've tried it with both UUIDs and /dev/sd* notation. Any help or advice would be much appreciated, as it's gotten quite frustrating.
Thank you.
-
fromClouds about 12 years: Well, that was simple enough; this was all it took. I had actually run update-initramfs -c -k previously, but didn't think to re-run update-grub afterward. Thanks much.
-
Luka about 7 years: Sorry to comment on such an old post, but I have/had the same problem. I just updated my fresh install of CentOS with yum update, and then I couldn't boot it... When I press a key before the automatic start, I have two CentOS entries to choose from; I assume they're two CentOS kernels, and it works with the second option. What should I do? Update grub, maybe? Thanks.