Does a bridge between 2 TAP interfaces need an IP address?
Do I need to have an IP configured on the bridge interface at all? I don't quite see the reason for it, as all it will do is make two virtual interfaces talk to each other.
No, a pure bridge only works at Ethernet level – it doesn't even look at the IP header. When you assign an IP address to br0, you're really assigning it to the host OS, which is connected to that bridge.
However, you haven't said anything about actually adding the tap interfaces as bridge ports. You need to explicitly tell the bridge which ports it manages:
brctl addif br0 tap0        (legacy bridge-utils tool)
ip link set tap0 master br0 (modern iproute2 equivalent)
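Putting it together, a minimal setup might look like this (a sketch; the tap names come from the question, and it assumes the taps already exist and that you run as root):

```shell
# Create the bridge and attach both OpenVPN tap interfaces as ports
ip link add br0 type bridge    # or: brctl addbr br0
ip link set tap0 master br0    # or: brctl addif br0 tap0
ip link set tap1 master br0    # or: brctl addif br0 tap1

# Bring everything up; the bridge itself needs no IP address
ip link set tap0 up
ip link set tap1 up
ip link set br0 up
```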
Is the fact that the bridge interface does not have an assigned ip/netmask/broadcast the reason I am not able to see the ping traffic on the bridge interface when tcpdumping that interface?
No.
But it's possible that the bridge ports are still in "learning" mode; the forwarding delay defaults to 30 seconds – check using brctl showstp br0. It's possible that the ports weren't added to the bridge (see above). It's possible that the port interfaces themselves are still down.
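If the forwarding delay is the culprit, you can check the port states and, since a bridge with only two tap ports cannot form a loop, safely drop the delay to zero (a sketch assuming the br0 from the question, run as root):

```shell
# Show per-port STP state; ports should read "forwarding", not "learning"
brctl showstp br0

# No loops are possible here, so remove the delay
brctl setfd br0 0                            # legacy tool
ip link set br0 type bridge forward_delay 0  # iproute2 equivalent

# Then watch for the ping traffic on the bridge
tcpdump -ni br0 icmp
```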
(Also, dear gods, why do people think they need to set the broadcast address? Really, the OS can already calculate it from IP | ~netmask. It's almost never useful to configure the broadcast address manually; it just becomes easier to accidentally get it wrong.)
If the answer to number 2 is Yes, I assume that it is also not possible to use iptables to block/allow traffic on that interface, correct? If so, is there any other way to accomplish what one would do with iptables on an interface like that?
Yes, you need ebtables to filter bridged traffic; it does not go through the IP firewall.
(Though, I guess that's not always true. The Untangle firewall, for example, seems to work in a cross-breed router/bridge mode, which is somewhat confusing.)
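For instance, filtering at the Ethernet layer with ebtables might look like this (a sketch; the tap name comes from the question, and the MAC address in the second rule is hypothetical):

```shell
# Drop all frames entering the bridge from tap0 (requires root)
ebtables -A FORWARD -i tap0 -j DROP

# Or filter more selectively: only IPv4 frames from one source MAC
ebtables -A FORWARD -i tap0 -s 00:11:22:33:44:55 -p IPv4 -j DROP
```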
ByteFlinger
Updated on September 18, 2022
Comments
-
ByteFlinger over 1 year
I am trying to set up OpenVPN on a machine so that I have 2 different tap interfaces (tap0 and tap1) and then a bridge connecting those interfaces. OpenVPN is set up with a server-bridge configuration for each TAP interface. The idea is that a client on tap0 will be able to talk to a client on tap1 and vice-versa.
There is no physical NIC involved in the bridge, and the bridge interface is set up with no ip/netmask/broadcast – just brought up with a single "ifconfig brX up".
I am able to ping between the 2 clients when they are both connected to the OpenVPN server, each to its own tap interface mentioned above; however, I see no traffic when trying to tcpdump the bridge interface.
I am a bit confused on some things:
Do I need to have an IP configured on the bridge interface at all? I don't quite see the reason for it, as all it will do is make two virtual interfaces talk to each other.
Is the fact that the bridge interface does not have an assigned ip/netmask/broadcast the reason I am not able to see the ping traffic on the bridge interface when tcpdumping that interface?
If the answer to number 2 is Yes, I assume that it is also not possible to use iptables to block/allow traffic on that interface, correct? If so, is there any other way to accomplish what one would do with iptables on an interface like that?
-
Marki555 almost 9 years: Do you have ip forwarding and proxy_arp enabled?
-
stackoverflower over 8 years: @ByteFlinger, thanks for your answer. Could you explain more about why containers suffer from this problem? I have a similar problem here: stackoverflow.com/q/31904089/842860. Thanks
-
ByteFlinger over 8 years: @stackoverflower Sorry, but I don't have enough knowledge of containers to know why. I guess in my case all containers were able to talk to each other, so traffic took the easiest route between them, which was through eth0 rather than my bridge. I ought to try disabling container-to-container communication some time and try again, to see whether I can go through the bridge instead.
-
stackoverflower over 8 years: @ByteFlinger, I think the bridge's routing policy is still influenced by the host's iptables, in particular source IP NAT. Docker by default adds a MASQUERADE rule to the POSTROUTING chain of the nat table matching source 172.17.0.0/16, so any packets from the 172.17.0.0/16 network will have their source IP changed, which can cause the receiver to send its response to the wrong IP. This was the problem I had.
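One way to inspect and work around such a rule might be (a hedged sketch; the subnet matches the comment above, but the exception rule itself is illustrative and needs root):

```shell
# List the nat POSTROUTING rules to spot Docker's MASQUERADE entry
iptables -t nat -S POSTROUTING

# Hypothetical exception: skip masquerading for traffic that stays
# inside 172.17.0.0/16, inserted ahead of Docker's rule
iptables -t nat -I POSTROUTING 1 -s 172.17.0.0/16 -d 172.17.0.0/16 -j ACCEPT
```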