Pings from the VPN network to the VPN client work; pings from the VPN client to the VPN network fail - why?
Solution 1
The root cause of this problem was a set of implicit default routes that were not visible in the table displayed by /sbin/route but were visible in the tables displayed by /sbin/ip route and /sbin/ip rule.
When these tables were displayed, it became apparent that a rule of this kind:
default table route_eth0 via 10.11.11.1 dev eth0
was overriding this rule:
10.8.0.0 10.11.11.2 255.255.255.0 UG 0 0 0 eth0
By editing /etc/sysconfig/network-scripts/route-eth0 (this could presumably also be done with /sbin/ip route, though I edited the file manually in this case), I was able to fix the issue.
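For context, RHEL's ifcfg scripts load per-interface static routes from /etc/sysconfig/network-scripts/route-eth0 into a dedicated table (route_eth0 above). The exact contents of our file are not reproduced here; the sketch below is hypothetical and simply shows the kind of entry that produces the overriding rule quoted above:

```
# /etc/sysconfig/network-scripts/route-eth0 (hypothetical sketch)
# An entry like this installs a default route into table route_eth0;
# a matching policy rule can then make it override the main table:
default table route_eth0 via 10.11.11.1 dev eth0
```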
So, what I learnt from this is that /sbin/route can't be relied upon to give you an accurate picture of Linux's effective routing rules and that it is better to use /sbin/ip for this purpose.
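In practice, the read-only commands below give the full picture. This is a diagnostic sketch: 10.8.0.22 is just the VPN client address from this setup and may be unreachable elsewhere.

```shell
# Show the policy-routing rules that choose between tables
# (the legacy `route -n` never shows these):
ip rule show

# Dump every routing table, including interface tables like route_eth0:
ip route show table all

# Ask the kernel which route it would actually pick for a destination;
# `|| true` because the address is illustrative and may be unreachable:
ip route get 10.8.0.22 || true
```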
Thanks to ptman whose answer to this question helped me see the light. Thank you ptman!
Solution 2
What about your iptables rules? They look rather empty.
I use the following rules; I am not sure whether they will solve your exact problem, though:
# Allow TUN interface connections to OpenVPN server
iptables -A INPUT -i tun+ -j ACCEPT

# Allow TUN interface connections to be forwarded through other interfaces
iptables -A FORWARD -i tun+ -j ACCEPT
iptables -A FORWARD -o tun+ -j ACCEPT

# Allow TUN interface connections to get out
iptables -A OUTPUT -o tun+ -j ACCEPT

# We want to allow routing from OpenVPN tunnels
$IPTABLES -t nat -A POSTROUTING -o eth1 -s 10.8.1.0/255.255.255.0 -j MASQUERADE
$IPTABLES -A FORWARD -i tun+ -o eth1 -s 10.8.1.0/255.255.255.0 -j ACCEPT
On the gateway you need a routing entry to direct traffic for 10.8.1.0/24 to the openvpn server.
On the openvpn server, traffic for the 10.8.1.0/24 subnet uses the IP address of the openvpn server's tun interface, for example 10.8.1.2. This should already be configured by openvpn itself, though.
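For illustration, on a RHEL-style gateway that routing entry could be made persistent with a route file. The file name, the interface name, and the assumption that the OpenVPN server's LAN address is 10.11.11.2 (as in the question) are mine:

```
# /etc/sysconfig/network-scripts/route-eth0 on the LAN gateway (sketch)
# Send the VPN subnet to the OpenVPN server's LAN address
10.8.1.0/24 via 10.11.11.2 dev eth0
```

The non-persistent equivalent would be ip route add 10.8.1.0/24 via 10.11.11.2 on the gateway.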
Update: I had to edit a few things. I use a setup here with 2 openvpn servers that also communicate with each other, so I had mixed in some things that aren't relevant to your situation.
jonseymour
Updated on September 18, 2022

Comments
-
jonseymour over 1 year
We are in the process of setting up an OpenVPN server for some servers running in a cloud. We are stumped by a connectivity problem whereby the hosts on the VPN server's LAN can ping the VPN client, but the reverse is not true.
The VPN client can ping the VPN server on its VPN address, but not on its LAN address.
tcpdump shows evidence of ping packets from the client reaching the host and replies being issued, but for some reason the replies never reach the tun0 interface on the VPN server or the client. Conversely, the ping requests from the VPN server's LAN to the VPN client are seen on all the expected interfaces, according to tcpdump.
A detailed description of our configuration and troubleshooting to date is given below.
The problem appears to be related to forwarding from addresses on the server's network back to the client network. What is really odd (to me) is that the LAN-initiated pings can do the full round trip, but client-initiated pings seem to get dropped somewhere between the VPN server's tun0 and eth1 interfaces.
What are we missing?
Situation:
3 hosts:
- VPN client (tun0: 10.8.0.22)
- VPN server (tun0: 10.8.0.1, eth1: 10.11.11.2, eth0: x.x.x.x)
- LAN server (eth0: 10.11.11.7)
Both servers are virtual machines, running RHEL 5.7. I think (but am not entirely sure) that the virtual hosting environment is VMWare.
Tests
- VPN client has established tunnel to VPN server, via the VPN server's eth0 interface
- VPN client can ping the VPN server on its tun0 interface 10.8.0.1
- VPN server can ping 10.8.0.22
- LAN server can ping 10.8.0.22
but:
- VPN client cannot ping the VPN server on its eth1 interface 10.11.11.2
- VPN client cannot ping the LAN server on its eth0 interface 10.11.11.7
For the ping test between 10.11.11.7 and 10.8.0.22:
- tcpdump shows ping requests and replies traversing tun0 on VPN server
- tcpdump shows ping requests and replies traversing eth1 on VPN server
- tcpdump shows ping requests and replies traversing eth0 on LAN server
For the ping test between 10.11.11.2 and 10.8.0.22:
- tcpdump shows ping requests and replies traversing tun0 on VPN server
For the ping test between 10.8.0.22 and 10.11.11.2:
- tcpdump shows ping requests traversing tun0 on VPN server
- tcpdump shows ping replies traversing eth1 on VPN server
- there is no trace of the reply on the tun0 interface
For the ping test between 10.8.0.22 and 10.11.11.7:
- tcpdump shows ping requests traversing tun0 on VPN server
- tcpdump shows ping requests traversing eth1 on VPN server
- tcpdump shows ping requests traversing eth0 on LAN server
- tcpdump shows ping replies traversing eth0 on LAN server
- there is no trace of the reply on either the tun0 or eth1 interfaces of the VPN server
ip_forwarding has been enabled on the VPN server. rp_filter has been disabled on the VPN server for all interfaces except the internet-facing interface, eth0.
iptables has been disabled (all chains at their default ACCEPT policy, no rules) on the client for the purposes of debugging the underlying issue.
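For reference, the persistent form of those kernel settings would be sysctl entries along these lines (a sketch using the interface names from this setup, not our exact configuration):

```
# /etc/sysctl.conf (sketch)
net.ipv4.ip_forward = 1
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.tun0.rp_filter = 0
net.ipv4.conf.eth1.rp_filter = 0
# the internet-facing eth0 keeps rp_filter = 1
```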
I have included dumps of the route -n and ifconfig for the relevant interfaces on each host.
On the OpenVPN server
$ /sbin/route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.8.0.2        0.0.0.0         255.255.255.255 UH    0      0        0 tun0
10.11.11.0      0.0.0.0         255.255.255.0   U     0      0        0 eth1
10.8.0.0        10.8.0.2        255.255.255.0   UG    0      0        0 tun0
x.x.x.x         0.0.0.0         255.255.248.0   U     0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth1
0.0.0.0         x.x.x.x         0.0.0.0         UG    0      0        0 eth0

$ find /proc/sys/net -name 'rp_filter' | while read f
> do echo $f $(cat $f)
> done
/proc/sys/net/ipv4/conf/tun0/rp_filter 0
/proc/sys/net/ipv4/conf/eth1/rp_filter 0
/proc/sys/net/ipv4/conf/eth0/rp_filter 1
/proc/sys/net/ipv4/conf/lo/rp_filter 0
/proc/sys/net/ipv4/conf/default/rp_filter 0
/proc/sys/net/ipv4/conf/all/rp_filter 0

$ cat /proc/sys/net/ipv4/ip_forward
1

$ sudo /sbin/iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

$ /sbin/ifconfig -a
eth0      Link encap:Ethernet  HWaddr DE:AD:BE:A6:28:21
          inet addr:x.x.x.x  Bcast:x.x.x.x  Mask:255.255.248.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:233929 errors:0 dropped:0 overruns:0 frame:0
          TX packets:24776 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:27881415 (26.5 MiB)  TX bytes:30534780 (29.1 MiB)

eth1      Link encap:Ethernet  HWaddr DE:AD:BE:3B:24:48
          inet addr:10.11.11.2  Bcast:10.11.11.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4929 errors:0 dropped:0 overruns:0 frame:0
          TX packets:10209 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:423658 (413.7 KiB)  TX bytes:863546 (843.3 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:11992 errors:0 dropped:0 overruns:0 frame:0
          TX packets:11992 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:34820967 (33.2 MiB)  TX bytes:34820967 (33.2 MiB)

tun0      Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          inet addr:10.8.0.1  P-t-P:10.8.0.2  Mask:255.255.255.255
          UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
          RX packets:69 errors:0 dropped:0 overruns:0 frame:0
          TX packets:57 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:100
          RX bytes:5796 (5.6 KiB)  TX bytes:4788 (4.6 KiB)

$ uname -a
Linux vhost0273 2.6.18-274.el5 #1 SMP Fri Jul 8 17:36:59 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux

$ ping -c1 10.8.0.22 -w 1
PING 10.8.0.22 (10.8.0.22) 56(84) bytes of data.
64 bytes from 10.8.0.22: icmp_seq=1 ttl=64 time=145 ms

--- 10.8.0.22 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 145.676/145.676/145.676/0.000 ms

$ ping -c1 10.11.11.7 -w 1
PING 10.11.11.7 (10.11.11.7) 56(84) bytes of data.
64 bytes from 10.11.11.7: icmp_seq=1 ttl=64 time=0.794 ms

--- 10.11.11.7 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.794/0.794/0.794/0.000 ms
On a host on the server LAN:
$ /sbin/ifconfig -a
eth0      Link encap:Ethernet  HWaddr DE:AD:BE:7F:45:72
          inet addr:10.11.11.7  Bcast:10.11.11.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:33897 errors:0 dropped:0 overruns:0 frame:0
          TX packets:38294 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2536157 (2.4 MiB)  TX bytes:8910725 (8.4 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:77779 errors:0 dropped:0 overruns:0 frame:0
          TX packets:77779 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0

$ /sbin/route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.11.11.0      0.0.0.0         255.255.255.0   U     0      0        0 eth0
10.8.0.0        10.11.11.2      255.255.255.0   UG    0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth0
0.0.0.0         10.11.11.2      0.0.0.0         UG    0      0        0 eth0

$ ping -c1 10.8.0.1 -w 1
PING 10.8.0.1 (10.8.0.1) 56(84) bytes of data.
64 bytes from 10.8.0.1: icmp_seq=1 ttl=64 time=0.516 ms

--- 10.8.0.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.516/0.516/0.516/0.000 ms

$ ping -c1 10.8.0.22 -w 1
PING 10.8.0.22 (10.8.0.22) 56(84) bytes of data.
64 bytes from 10.8.0.22: icmp_seq=1 ttl=63 time=146 ms

--- 10.8.0.22 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 146.913/146.913/146.913/0.000 ms

$ ping -c1 10.11.11.2 -w 1
PING 10.11.11.2 (10.11.11.2) 56(84) bytes of data.
64 bytes from 10.11.11.2: icmp_seq=1 ttl=64 time=0.775 ms

--- 10.11.11.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.775/0.775/0.775/0.000 ms
On the VPN client
tun0      Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          inet addr:10.8.0.22  P-t-P:10.8.0.21  Mask:255.255.255.255
          UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:100
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

$ /sbin/route -n | grep ^10
10.8.0.21       0.0.0.0         255.255.255.255 UH    0      0        0 tun0
10.8.0.1        0.0.0.0         255.255.255.255 UH    0      0        0 tun0
10.0.1.0        0.0.0.0         255.255.255.0   U     2      0        0 wlan0
10.1.1.0        0.0.0.0         255.255.255.0   U     1      0        0 eth0
10.11.11.0      10.8.0.1        255.255.255.0   UG    0      0        0 tun0

$ ping 10.8.0.1
PING 10.8.0.1 (10.8.0.1) 56(84) bytes of data.
64 bytes from 10.8.0.1: icmp_seq=1 ttl=64 time=145 ms

$ ping 10.8.0.2 -w 1
PING 10.8.0.2 (10.8.0.2) 56(84) bytes of data.

--- 10.8.0.2 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

$ ping 10.11.11.2 -w 1
PING 10.11.11.2 (10.11.11.2) 56(84) bytes of data.

--- 10.11.11.2 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

$ ping 10.11.11.7 -w 1
PING 10.11.11.7 (10.11.11.7) 56(84) bytes of data.

--- 10.11.11.7 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 999ms
-
Admin about 12 years
I have made some additional discoveries regarding this issue. On the ping flow that works, 10.11.11.7 -> 10.8.0.22, the packets leaving 10.11.11.7 are framed with a MAC destination of 10.11.11.2 (which is expected). However, on the ping flow that doesn't work, the reply packets from 10.11.11.7 -> 10.8.0.22 are framed with a MAC destination that corresponds to 10.11.11.1. Why this occurs is not obvious. For example, I haven't found any arp traces in which the bad MAC address is claiming to own 10.11.11.2.
-
Admin about 12 years
For reference, I have asked the question about the MAC addresses here.
-
Admin about 12 years
I would think that 10.11.11.1 is the gateway of the openvpn server, isn't it? The openvpn server doesn't know what to do with the packets for 10.8.0.22 from the 10.11.11.0/24 subnet, so it sends them to the default gateway, which in turn probably drops them, unless it knows how to route them (unlikely in this case). Please implement the iptables rules I provided and then try again. It may not work right away, but we can go from there.
-
Admin about 12 years
@aseq - in this example x.x.x.x was the gateway of the VPN server (I have masked the actual address). 10.11.11.1 is a gateway to another 10.x.x.x network but wasn't a useful gateway to the network I needed to reach. So, 10.11.11.7 needed to route responses back to 10.8.0.22 via 10.11.11.2 (the VPN server). As it was, an adapter-specific route was causing these packets to flow to 10.11.11.1, which was the wrong way to redirect them.
-
jonseymour about 12 years
Thanks for the reply. They are empty, but they shouldn't be preventing the ping flows. In particular, ping flows from the server side to the client side and back work ok, but ping flows from the client side to the server side get dropped (somewhere near the VPN server) on the way back.
-
aseq about 12 years
See my answer - will that work for you?
-
jonseymour about 12 years
I am not sure this is relevant to my case. I don't actually need to route to the target network (it is available via eth1 of the VPN server). I currently have iptables switched off, to remove it from the issue (e.g. INPUT, FORWARD, OUTPUT chains all set to ACCEPT). Added a comment above about further discoveries.
-
aseq about 12 years
I think you will have to use specific iptables rules in order to allow one openvpn client to reach another. In addition you will need iptables to allow an openvpn client to reach the openvpn server's internal interface.
-
Konerak about 12 years
Good find, and thanks for documenting this here! You can accept your own answer, which seems appropriate here.