Bridging Virtual Networking into a Real LAN on an OpenNebula Cluster
Solution 1
I ran into this same problem, and the culprit was the VM that I had set up using Ubuntu vmbuilder.
Check out this intro, which includes a complete VM. I'm confident this approach should work for you.
They provide an init script, amongst other things, that sets up certain networking parameters at boot, so this is likely at the heart of the issue. They explain it more fully here, in the section on contextualization.
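To give a sense of what that init script does, here is a heavily simplified sketch (not the actual OpenNebula vmcontext script; the device name and the context variable names such as HOSTNAME or IP_PUBLIC depend entirely on what you put in your own CONTEXT section):
#!/bin/bash
# Simplified sketch: mount the context CD OpenNebula attaches to the VM,
# source the variables it carries, and apply them at boot.
mkdir -p /mnt/context
mount -t iso9660 /dev/sr0 /mnt/context   # device may differ (e.g. /dev/sdb); see "Thoughts" below
if [ -f /mnt/context/context.sh ]; then
    . /mnt/context/context.sh            # defines whatever you put in CONTEXT, e.g. HOSTNAME, IP_PUBLIC
fi
[ -n "$HOSTNAME" ]  && hostname "$HOSTNAME"
[ -n "$IP_PUBLIC" ] && ip addr add "$IP_PUBLIC/24" dev eth0 && ip link set eth0 up
umount /mnt/context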
General Context Method
So it turns out installing the contextualization files in the VM is a rather simple task.
If you are using vmbuilder to make your VM, you can use a post-install hook (and I'm sure other build methods have similar hooks too).
Create the copy file (host file to guest file, single-space separated)
copy.cfg
<path>/rc.context /etc/init.d/vmcontext
<path>/postinstall.sh /tmp/postinstall.sh
Create the post install hook
postinstall.sh
#!/bin/bash
# create mount point for context image
mkdir /mnt/context
# setup vmcontext at runlevel 2 service level 1
ln -s /etc/init.d/vmcontext /etc/rc2.d/S01vmcontext
Create a script to chroot into the VM guest and run the post-install hook
chvm.sh
#!/bin/bash
# vmbuilder passes vmguest root as $1
chroot "$1" /tmp/postinstall.sh
Finally, edit your vmbuilder conf file for the VM
yourvm.cfg
...
copy = <full_path>/copy.cfg
execscript = <full_path>/chvm.sh
...
Then construct with vmbuilder
sudo vmbuilder kvm ubuntu -c yourvm.cfg
Add a Nebula-based VNC
Include something like this in your VM template
GRAPHICS = [
LISTEN = 0.0.0.0,
PORT = 5900,
TYPE = vnc ]
Then SSH tunnel to a computer that's on the guest machine's network
ssh -L 5900:127.0.0.1:5900 yourserver.com
And open a VNC client pointed at 127.0.0.1 on your local computer.
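For example, with a TigerVNC-style viewer (assuming vncviewer is installed locally; the double colon means "connect to this port number"):
vncviewer 127.0.0.1::5900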
Thoughts
Nebula can't force kvm/libvirt to attach your drives as hd*/sd*, so you'll need to play around with where they end up (and edit the rc file to reflect this). E.g., with my Ubuntu setup the qcow2 image ends up at /dev/sda and the context image at /dev/sr0.
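A quick way to see where the context image actually landed inside the guest (just an illustration, using blkid, which reports the filesystem type of each block device):
blkid | grep iso9660
# e.g. /dev/sr0: ... TYPE="iso9660"  -> mount this one in your rc/vmcontext script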
I also had an issue where either kvm or nebula couldn't guess the format of my .qcow2 image, so in DISK I had to include DRIVER = qcow2. The same problem occurs for the processor architecture, so in OS I had to include ARCH = x86_64 (since I was running an amd64 guest).
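For illustration, the relevant parts of the VM template ended up looking roughly like this (the image name and target here are placeholders, not my actual values):
DISK = [
  IMAGE  = "your-qcow2-image",
  DRIVER = qcow2,
  TARGET = sda ]
OS = [
  ARCH = x86_64 ]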
Good luck
Solution 2
When you're first getting set up with OpenNebula, the VNC-based console is a lifesaver. I highly recommend setting that up before you try to diagnose anything else, because it's so useful to be able to look at the state of your VM even when its networking configuration is broken. Seriously - do not pass go, do not collect $200 - go get the noVNC component working and then go back to the rest of your OpenNebula setup.
As to your actual question - the problem is almost undoubtedly that you're using a stock OS image without any network contextualization scripts. OpenNebula, by design, doesn't actually manage IP addresses even though it maintains a pool of them and "leases" them out. What it's really doing is assigning a MAC address to the virtual ethernet interface that has the desired IP address encoded in the last 4 bytes of the MAC address, and it's up to the OS to recognize that and assign an IP appropriately.
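To make that concrete: with the default 02:00 MAC prefix, a lease for 192.168.1.140 shows up on the guest's interface as MAC 02:00:c0:a8:01:8c, and the contextualization (or a quick manual check like the sketch below) turns it back into an IP:
#!/bin/bash
# Illustrative only: recover the IP that OpenNebula encoded in the MAC's last 4 bytes.
MAC=$(cat /sys/class/net/eth0/address)                # e.g. 02:00:c0:a8:01:8c
IP=$(printf "%d.%d.%d.%d" $(echo "$MAC" | awk -F: '{printf "0x%s 0x%s 0x%s 0x%s", $3, $4, $5, $6}'))
echo "IP encoded in MAC: $IP"                         # -> 192.168.1.140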
OpenNebula has some documentation about the contextualization, but it's honestly not that good. I found it was easier to just read the source to the sample vmcontext.sh script and setup my VMs to use that mechanism by running vmcontext at startup in the appropriate point in the boot process.
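On a stock Debian/Ubuntu image, wiring that in can be as simple as the following (a sketch; where you fetch vmcontext.sh from and how you order it in the boot sequence is up to you):
cp vmcontext.sh /etc/init.d/vmcontext
chmod +x /etc/init.d/vmcontext
update-rc.d vmcontext defaults   # Debian/Ubuntu; use chkconfig or a systemd unit elsewhere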
user101012
Updated on September 18, 2022
Comments
-
user101012 over 1 year
I'm running Open Nebula with 1 Cluster Controller and 3 Nodes.
I registered the nodes at the front-end controller and I can start an Ubuntu virtual machine on one of the nodes.
However from my network I cannot ping the virtual machine. I am not quite sure if I have set up the virtual machine correctly.
The nodes all have a br0 interface which is bridged with eth0. The IP addresses are in the 192.168.1.x range.
The template file I used for the vmnet is:
NAME = "VM LAN" TYPE = RANGED BRIDGE = br0 # Replace br0 with the bridge interface from the cluster nodes NETWORK_ADDRESS = 192.168.1.128 # Replace with corresponding IP address NETWORK_SIZE = 126 NETMASK = 255.255.255.0 GATEWAY = 192.168.1.1 NS = 192.168.1.1
However, I cannot reach any of the virtual machines, even though Sunstone says the virtual machine is running and onevm list also reports it as running.
It might be helpful to know that we are using KVM as the hypervisor, and I am not quite sure whether the virbr0 interface that was created automatically when installing KVM might be a problem.
-
bias over 12 years
I have this exact same issue - all of the intros on the subject are so minimal that even a small setup like this isn't doable! Aargh!