With Ubuntu 16.04 and KVM, I can't get vms to network without using NAT
Solution 1
Solved.
The problem was the default settings of the br_netfilter module, which sends bridged packets through iptables. The libvirt docs do mention this on their networking page, but most tutorials I'd been following did not cover it.
For some reason iptables was eating those packets (maybe something Docker added?), but sending bridged traffic through iptables is apparently inefficient regardless, so the fix is to bypass iptables by changing those settings.
Note that the method I outline here is actually given as an example in the sysctl.d man page.
Create /etc/udev/rules.d/99-bridge.rules and insert this line:
ACTION=="add", SUBSYSTEM=="module", KERNEL=="br_netfilter", RUN+="/lib/systemd/systemd-sysctl --prefix=/net/bridge"
Then create /etc/sysctl.d/bridge.conf and insert these three lines:
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
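If you'd rather not reboot to pick these up, the settings can be applied immediately; a minimal sketch, assuming the module name and file path above:

```shell
# Load br_netfilter now so its sysctl keys exist (it is normally
# auto-loaded the first time a bridge is created).
sudo modprobe br_netfilter

# Apply the settings from the file created above.
sudo sysctl -p /etc/sysctl.d/bridge.conf

# Verify -- this should now report 0.
sysctl net.bridge.bridge-nf-call-iptables
```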
Then I just had to revert to my original bridge setup, which involves an /etc/network/interfaces that looks like this:
source /etc/network/interfaces.d/*
auto lo
iface lo inet loopback
auto enp2s0
iface enp2s0 inet manual
auto br0
iface br0 inet dhcp
bridge_ports enp2s0
bridge_stp off
bridge_fd 0
bridge_maxwait 0
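After writing that file, it's worth sanity-checking the bridge before touching the VM; these commands come from the bridge-utils and iproute2 packages that Ubuntu 16.04 ships with:

```shell
brctl show br0        # enp2s0 should appear in the "interfaces" column
ip addr show br0      # br0, not enp2s0, should hold the host's IP
```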
And a virsh network interface definition that looks like this:
<interface type='bridge'>
<mac address='52:54:00:37:e1:55'/>
<source bridge='br0'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
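To apply a definition like this to an existing VM, the usual route is virsh edit (myvm here is just the example domain name used later in this question):

```shell
virsh edit myvm       # replace the <interface> element, save, and exit
virsh shutdown myvm   # interface changes take effect on the next full start
virsh start myvm
```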
With all that in place, my VM boots, it gets an IP, and I can freely talk to my host, the local network, and the internet.
Solution 2
I got it to work, using direct instead of bridge:
Using this /etc/network/interfaces:
source /etc/network/interfaces.d/*
auto lo
iface lo inet loopback
auto enp2s0
iface enp2s0 inet dhcp
auto enp3s0
iface enp3s0 inet manual
And this virsh setup:
<interface type='direct'>
<mac address='52:54:00:37:e1:55'/>
<source dev='enp3s0' mode='vepa'/>
<target dev='macvtap0'/>
<model type='rtl8139'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
My local network sees the 52:54:00:37:e1:55 MAC address, the DHCP server gives it an IP, and I can SSH into this machine via that IP, so everything seems good. I can run a second VM concurrently and its MAC also gets an IP, so I seem to have the solution I wanted.
Maybe next I'll try doing all this on the original ethernet port. I'm also curious what bridging really is, and what it solves that direct does not, if anyone reading this has an answer. Thanks!
UPDATE: The problem this solution has is that any VMs that are sharing a single physical interface fail to talk to each other. Same is true of the host, when the host and VMs share the same physical interface.
I suspect that this is the very problem that bridging is supposed to solve, but I could really use some guidance from someone with some experience with this stuff.
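For what it's worth, this matches how macvtap is documented to behave: in vepa mode a guest's frames are pushed straight out the physical NIC and only reach a sibling guest if the external switch supports VEPA hairpin mode, and in every macvtap mode, host-to-guest traffic over the shared NIC is blocked. Switching the macvtap mode from vepa to bridge should at least let guests on the same interface reach each other (though still not the host); the one-line change would be:

```xml
<!-- guests sharing enp3s0 can now reach each other; host/guest traffic still cannot -->
<source dev='enp3s0' mode='bridge'/>
```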
Solution 3
I spent a week wrestling with this and ... it was Docker. When Docker is running, it installs runtime netfilter rules intended to insulate Docker containers. These have the side effect of blocking traffic to bridged VMs.
I stopped and disabled Docker via systemctl, rebooted the host system, and everything worked.
I guess I'll install Docker in a bridged VM, where it can be king of itself.
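If you need Docker and bridged VMs on the same host, the br_netfilter sysctls from Solution 1 are the cleaner fix; another commonly cited option is telling the Docker daemon not to manage iptables at all via /etc/docker/daemon.json (restart the docker service afterwards). Be warned that this breaks Docker's own container NAT, so treat it as a last resort:

```json
{
  "iptables": false
}
```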
brakeley
Updated on September 18, 2022

Comments
-
brakeley over 1 year
I'm trying to get a VM running in KVM that acts as if it has its own physical network interface on the local network. I believe this is called bridging, and I've tried following a number of different guides online, with no luck. Every time, I end up with a VM that has a virtual network adapter that cannot talk to my local network.
My host and guests are all running Ubuntu server 16.04.
I started by adding a br0 to my /etc/network/interfaces file, which now looks like this:

~$ cat /etc/network/interfaces
source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback

iface enp2s0 inet manual

auto br0
iface br0 inet dhcp
    bridge_ports enp2s0
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0
After rebooting, my ifconfig looks like this:
~$ ifconfig
br0       Link encap:Ethernet  HWaddr d0:50:99:c0:25:fb
          inet addr:192.168.113.2  Bcast:192.168.113.255  Mask:255.255.255.0
          ...
docker0   Link encap:Ethernet  HWaddr 02:42:dc:4f:96:9e
          ...
enp2s0    Link encap:Ethernet  HWaddr d0:50:99:c0:25:fb
          inet6 addr: fe80::d250:99ff:fec0:25fb/64 Scope:Link
          ...
lo        Link encap:Local Loopback
          ...
veth009cb0a Link encap:Ethernet  HWaddr 66:d6:6c:e7:80:cb
          inet6 addr: fe80::64d6:6cff:fee7:80cb/64 Scope:Link
          ...
virbr0    Link encap:Ethernet  HWaddr 52:54:00:1a:56:65
          inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
          ...
The host has a static entry in my DHCP server, so it always gets 192.168.113.2; that IP on br0 is correct. As I understand it, all I should need to do now is start a new VM using the br0 interface. So I run this:
sudo virt-install --virt-type=kvm --name myvm \
    --hvm --ram 4096 --vcpus=2 --graphics vnc \
    --network bridge=br0 \
    --os-type=linux --os-variant=ubuntu16.04 \
    --cdrom=/var/lib/libvirt/boot/ubuntu-16.04.2-server-amd64.iso \
    --disk path=/var/lib/libvirt/images/myvm.qcow2,size=16,bus=virtio,format=qcow2
I can VNC into the vm and progress through installation at this point, until I get to the "Configuring the network with DHCP" phase, at which point it times out and never gets an IP.
If I use the default NAT interface, it works fine: the VM gets an IP in the 192.168.122.xxx range and can access the local network and greater internet, no problem. If I then alter the virsh config on this working VM to bridge to br0, I can't talk to any network, local or otherwise. DHCP fails to get an IP, and setting a static IP yields no traffic on the outside.
In case it is helpful, while the installer was running, I opened another terminal and got more info from the host:
~$ brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.d05099c025fb       no              enp2s0
                                                        vnet0
docker0         8000.0242dc4f969e       no              veth009cb0a
virbr0          8000.5254001a5665       yes             virbr0-nic

~$ brctl showmacs br0
port no mac addr                is local?       ageing timer
  1     00:04:20:eb:7e:96       no                 3.90
  1     00:11:32:63:9c:cf       no                 1.86
  1     30:46:9a:0f:81:cd       no                 3.39
  1     44:8a:5b:9e:d1:90       no                 0.00
  1     88:de:a9:13:86:48       no                 0.29
  1     b8:ae:ed:73:3e:ca       no                 3.89
  1     d0:50:99:c0:25:fb       yes                0.00
  1     d0:50:99:c0:25:fb       yes                0.00
  1     d0:50:99:e0:21:46       no                 2.90
  1     f0:f6:1c:e3:7f:be       no               173.56
  2     fe:54:00:6f:b8:64       yes                0.00
  2     fe:54:00:6f:b8:64       yes                0.00

~$ ip route
default via 192.168.113.1 dev br0
172.17.0.0/16 dev docker0  proto kernel  scope link  src 172.17.0.1
192.168.113.0/24 dev br0  proto kernel  scope link  src 192.168.113.2
192.168.122.0/24 dev virbr0  proto kernel  scope link  src 192.168.122.1 linkdown
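A check I'd suggest at this stage (not something from the original debugging) is to watch for the guest's DHCP broadcasts on each leg of the bridge with tcpdump; if they show up on vnet0 but never on enp2s0, something between the two bridge ports, such as netfilter, is dropping the frames:

```shell
sudo tcpdump -ni vnet0 'port 67 or port 68'    # guest side of the bridge
sudo tcpdump -ni enp2s0 'port 67 or port 68'   # physical side
```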
I'm willing to throw the entire dumpxml up here if someone wants it, but here's just the network section:
<interface type='bridge'>
  <mac address='52:54:00:6f:b8:64'/>
  <source bridge='br0'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
UPDATE 2017-03-25: If I change the interface definition to the following:
<interface type='network'>
  <mac address='52:54:00:6f:b8:64'/>
  <source network='default'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
Then NAT works, I get the 192.168.122.xxx IP, and I can talk to external services, etc. So... is there something wrong with my host's br0? If so, why does the host get an IP on it just fine? Are there some Ethernet devices that just don't support bridging? Here's the result of lspci on the host:
~$ lspci | grep Ethernet
02:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
03:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
I haven't set up the second Ethernet controller at all; maybe I'll set it up and try to bridge that one instead.
UPDATE 2017-03-25 b: The second interface didn't appear to change the results. Here's the resulting /etc/network/interfaces:
source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback

auto enp2s0
iface enp2s0 inet dhcp

auto enp3s0
iface enp3s0 inet manual

auto br0
iface br0 inet dhcp
    bridge_ports enp3s0
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0
Which, when I run ip a, results in:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether d0:50:99:c0:25:fb brd ff:ff:ff:ff:ff:ff
    inet 192.168.113.2/24 brd 192.168.113.255 scope global enp2s0
       valid_lft forever preferred_lft forever
    inet6 fe80::d250:99ff:fec0:25fb/64 scope link
       valid_lft forever preferred_lft forever
3: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br0 state UP group default qlen 1000
    link/ether d0:50:99:c0:25:fa brd ff:ff:ff:ff:ff:ff
    inet6 fe80::d250:99ff:fec0:25fa/64 scope link
       valid_lft forever preferred_lft forever
4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d0:50:99:c0:25:fa brd ff:ff:ff:ff:ff:ff
    inet 192.168.113.100/24 brd 192.168.113.255 scope global br0
       valid_lft forever preferred_lft forever
    inet6 fe80::d250:99ff:fec0:25fa/64 scope link
       valid_lft forever preferred_lft forever
5: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:1a:56:65 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
6: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:1a:56:65 brd ff:ff:ff:ff:ff:ff
7: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:18:2c:73:bb brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:18ff:fe2c:73bb/64 scope link
       valid_lft forever preferred_lft forever
9: vethaa3cd40@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether ae:05:f7:1b:f9:9e brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::ac05:f7ff:fe1b:f99e/64 scope link
       valid_lft forever preferred_lft forever
10: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UNKNOWN group default qlen 1000
    link/ether fe:54:00:3a:54:b3 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fe3a:54b3/64 scope link
       valid_lft forever preferred_lft forever
The VM continues to have the exact same problems when told to use br0.
-
Delorean about 7 years: This does not provide an answer to the question. To critique or request clarification from an author, leave a comment below their post. - From Review
-
brakeley about 7 years: Delorean, apologies. It looks like Zanna has edited my post and put my solution into this answer.