How to connect a Kubernetes pod to the outside world without a forwarding rule (Google Container Engine)
TL;DR: Use the Internal IP of your node as the public IP in your service definition.
If you enable verbose logging on the kube-proxy, you will see that it appears to be creating the appropriate iptables rules:
I0602 04:07:32.046823 24360 roundrobin.go:98] LoadBalancerRR service "default/app-frontend-service:" did not exist, created
I0602 04:07:32.047153 24360 iptables.go:186] running iptables -A [KUBE-PORTALS-HOST -t nat -m comment --comment default/app-frontend-service: -p tcp -m tcp -d 10.119.244.130/32 --dport 80 -j DNAT --to-destination 10.240.121.42:36970]
I0602 04:07:32.048446 24360 proxier.go:606] Opened iptables from-host portal for service "default/app-frontend-service:" on TCP 10.119.244.130:80
I0602 04:07:32.049525 24360 iptables.go:186] running iptables -C [KUBE-PORTALS-CONTAINER -t nat -m comment --comment default/app-frontend-service: -p tcp -m tcp -d 23.251.156.36/32 --dport 80 -j REDIRECT --to-ports 36970]
I0602 04:07:32.050872 24360 iptables.go:186] running iptables -A [KUBE-PORTALS-CONTAINER -t nat -m comment --comment default/app-frontend-service: -p tcp -m tcp -d 23.251.156.36/32 --dport 80 -j REDIRECT --to-ports 36970]
I0602 04:07:32.052247 24360 proxier.go:595] Opened iptables from-containers portal for service "default/app-frontend-service:" on TCP 23.251.156.36:80
I0602 04:07:32.053222 24360 iptables.go:186] running iptables -C [KUBE-PORTALS-HOST -t nat -m comment --comment default/app-frontend-service: -p tcp -m tcp -d 23.251.156.36/32 --dport 80 -j DNAT --to-destination 10.240.121.42:36970]
I0602 04:07:32.054491 24360 iptables.go:186] running iptables -A [KUBE-PORTALS-HOST -t nat -m comment --comment default/app-frontend-service: -p tcp -m tcp -d 23.251.156.36/32 --dport 80 -j DNAT --to-destination 10.240.121.42:36970]
I0602 04:07:32.055848 24360 proxier.go:606] Opened iptables from-host portal for service "default/app-frontend-service:" on TCP 23.251.156.36:80
Listing the iptables entries using iptables -L -t nat shows the public IP turned into the reverse DNS name, like you saw:
Chain KUBE-PORTALS-CONTAINER (1 references)
target prot opt source destination
REDIRECT tcp -- anywhere 10.119.240.2 /* default/kubernetes: */ tcp dpt:https redir ports 50353
REDIRECT tcp -- anywhere 10.119.240.1 /* default/kubernetes-ro: */ tcp dpt:http redir ports 54605
REDIRECT udp -- anywhere 10.119.240.10 /* default/kube-dns:dns */ udp dpt:domain redir ports 37723
REDIRECT tcp -- anywhere 10.119.240.10 /* default/kube-dns:dns-tcp */ tcp dpt:domain redir ports 50126
REDIRECT tcp -- anywhere 10.119.244.130 /* default/app-frontend-service: */ tcp dpt:http redir ports 36970
REDIRECT tcp -- anywhere 36.156.251.23.bc.googleusercontent.com /* default/app-frontend-service: */ tcp dpt:http redir ports 36970
But adding the -n option shows the IP address (by default, -L does a reverse lookup on the IP address, which is why you see the DNS name):
Chain KUBE-PORTALS-CONTAINER (1 references)
target prot opt source destination
REDIRECT tcp -- 0.0.0.0/0 10.119.240.2 /* default/kubernetes: */ tcp dpt:443 redir ports 50353
REDIRECT tcp -- 0.0.0.0/0 10.119.240.1 /* default/kubernetes-ro: */ tcp dpt:80 redir ports 54605
REDIRECT udp -- 0.0.0.0/0 10.119.240.10 /* default/kube-dns:dns */ udp dpt:53 redir ports 37723
REDIRECT tcp -- 0.0.0.0/0 10.119.240.10 /* default/kube-dns:dns-tcp */ tcp dpt:53 redir ports 50126
REDIRECT tcp -- 0.0.0.0/0 10.119.244.130 /* default/app-frontend-service: */ tcp dpt:80 redir ports 36970
REDIRECT tcp -- 0.0.0.0/0 23.251.156.36 /* default/app-frontend-service: */ tcp dpt:80 redir ports 36970
At this point, you can access the service from within the cluster using both the internal and external IPs:
$ curl 10.119.244.130:80
app-frontend-5pl5s
$ curl 23.251.156.36:80
app-frontend-5pl5s
Without adding a firewall rule, attempting to connect to the public IP remotely times out. If you add a firewall rule, then you will reliably get connection refused:
$ curl 23.251.156.36
curl: (7) Failed to connect to 23.251.156.36 port 80: Connection refused
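For reference, a firewall rule like the one mentioned here can be created with gcloud along these lines; the rule name and target tag below are illustrative placeholders, not values from the original post:

gcloud compute firewall-rules create allow-node-http \
    --allow tcp:80 \
    --target-tags <node-tag>   # hypothetical tag on the node VM; omit to apply to all instances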
If you enable some iptables logging:
sudo iptables -t nat -I KUBE-PORTALS-CONTAINER -m tcp -p tcp --dport 80 -j LOG --log-prefix "WTF: "
and then grep the output of dmesg for WTF, it becomes clear that the packets are arriving on the 10.x internal IP address of the VM rather than the ephemeral external IP address that had been set as the public IP on the service.
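For example, pulling those log lines out of the kernel log looks something like:

dmesg | grep "WTF: "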
It turns out that the problem is that GCE has two types of external IPs: ForwardingRules (which forward with the destination IP intact) and 1-to-1 NAT (which actually rewrites the destination IP to the internal IP). The external IP of the VM is the latter type, so when the node receives the packets the iptables rule doesn't match.
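You can see both addresses on the node itself with gcloud; assuming a typical GCE instance, the internal address appears as networkIP and the 1-to-1 NAT address as natIP (the instance name and zone below are placeholders):

gcloud compute instances describe <node-name> --zone <zone> | grep -E "networkIP|natIP"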
The fix is actually pretty simple (but non-intuitive): use the Internal IP of your node as the public IP in your service definition. After updating your service.yaml file to set publicIPs to the Internal IP (e.g. 10.240.121.42), you will be able to hit your application from outside of the GCE network.
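As a minimal sketch (trimmed to the relevant fields, using the v1beta3 API from the question below and the internal IP from this example), the updated service.yaml would look something like this:

kind: Service
apiVersion: v1beta3
metadata:
  name: app-frontend-service
spec:
  ports:
    - port: 80
      targetPort: app-frontend-port
      protocol: TCP
  publicIPs:
    - 10.240.121.42   # the node's internal IP, not its ephemeral external address
  selector:
    name: app-frontend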
Reese
Updated on June 19, 2022

Comments
- Reese almost 2 years: I'm using Google's Container Engine service, and I have a pod running a server that listens on port 3000. I set up the service to connect port 80 to that pod's port 3000. I am able to curl the service using its local and public IP from within the node, but not from outside. I set up a firewall rule to allow port 80 and send it to the node, but I keep getting 'connection refused' from outside the network. I'm trying to do this without a forwarding rule, since there's only one pod and it looked like forwarding rules cost money and do load balancing. I think the firewall rule works, because when I add createExternalLoadBalancer: true to the service's spec, the external IP created by the forwarding rule works as expected. Do I need to do something else? Set up a route or something?

controller.yaml
kind: ReplicationController
apiVersion: v1beta3
metadata:
  name: app-frontend
  labels:
    name: app-frontend
    app: app
    role: frontend
spec:
  replicas: 1
  selector:
    name: app-frontend
  template:
    metadata:
      labels:
        name: app-frontend
        app: app
        role: frontend
    spec:
      containers:
        - name: node-frontend
          image: gcr.io/project_id/app-frontend
          ports:
            - name: app-frontend-port
              containerPort: 3000
              targetPort: 3000
              protocol: TCP
service.yaml
kind: Service
apiVersion: v1beta3
metadata:
  name: app-frontend-service
  labels:
    name: app-frontend-service
    app: app
    role: frontend
spec:
  ports:
    - port: 80
      targetPort: app-frontend-port
      protocol: TCP
  publicIPs:
    - 123.45.67.89
  selector:
    name: app-frontend
Edit (additional details): Creating this service adds these additional rules, found when I run iptables -L -t nat:

Chain KUBE-PORTALS-CONTAINER (1 references)
target prot opt source destination
REDIRECT tcp -- anywhere 10.247.247.206 /* default/app-frontend-service: */ tcp dpt:http redir ports 56859
REDIRECT tcp -- anywhere 89.67.45.123.bc.googleusercontent.com /* default/app-frontend-service: */ tcp dpt:http redir ports 56859

Chain KUBE-PORTALS-HOST (1 references)
target prot opt source destination
DNAT tcp -- anywhere 10.247.247.206 /* default/app-frontend-service: */ tcp dpt:http to:10.241.69.28:56859
DNAT tcp -- anywhere 89.67.45.123.bc.googleusercontent.com /* default/app-frontend-service: */ tcp dpt:http to:10.241.69.28:56859
I don't fully understand iptables, so I'm not sure how the destination port matches my service. I found that the DNS for 89.67.45.123.bc.googleusercontent.com resolves to 123.45.67.89.

kubectl get services shows the IP address and port I specified:

NAME                   IP(S)            PORT(S)
app-frontend-service   10.247.243.151   80/TCP
                       123.45.67.89
Nothing recent from external IPs is showing up in /var/log/kube-proxy.log
- Reese almost 9 years: I think I already followed the steps you mentioned. As you can see in my service.yaml, the service has a publicIPs field, which is set to the external IP of the node.
- Alex Robinson almost 9 years: The firewall rule for port 80 and adding the node's external IP to the service's publicIPs field should be all that's needed to make it work. Once you've double-checked that the IP in the service is the node's IP and that it's the same IP as the one you're testing against, the next place to look is to run sudo iptables -L -t nat on the node to see if there's an iptables rule referring to the IP address. Assuming there is, check /var/log/kube-proxy.log for any details on why requests to that service aren't being routed.
- Reese almost 9 years: Confirmed that the node's listed external IP matches the one in the service spec. I didn't see the listed IP in the iptables output, but I did see it backwards as a prefix of the destination domain. I've updated my question with some additional details.
- Reese almost 9 years: To test the NodePort feature you are talking about, can I upgrade the cluster to use the latest version? Or do I need to start a new cluster?
- Reese almost 9 years: Thank you, I got it working with the IP returned from ifconfig on the node. I wasn't sure how to enable verbose logging, or how else to get it. Is this applicable to multi-node scenarios? If all the nodes' IPs were in the list, would it send traffic to whichever one had a container listening on that port? What would it do if two containers on two different nodes were listening on that port?
- Robert Bailey almost 9 years: To enable verbose logging, edit /etc/default/kube-proxy and change --v=2 to --v=4. Then run sudo service kube-proxy restart. The log file is written to /var/log/kube-proxy.log. (These steps are collected into a short sketch after the comments below.)
- Robert Bailey almost 9 years: This is applicable to multi-node scenarios in the way that Alex described below: you can add multiple publicIPs entries to your service and send requests to all of the nodes in your cluster (e.g. using DNS round robin). The kube-proxy on each node intercepts the requests and redirects them to the appropriate pods running in the cluster.
- Robert Bailey almost 9 years: You will need to use different ports or different publicIPs for different externally exposed services (although pods can reuse the same port numbers inside the cluster, because each pod gets a distinct IP). This is the advantage of using the GCE external load balancer: since each service gets a different IP, you can use the same port (e.g. port 80 or 443) on all of your services.
- CESCO about 8 years: @RobertBailey It does not seem to work like this anymore. If I set the internal IP as the external IP in the service definition, I can't access my app from anywhere except inside my cluster. Brand new GCE cluster running Kubernetes 1.1.7, by the way. A shame that load balancing costs more than the VM itself; it ruins my tests.
- Tim Hockin about 8 years: I cannot repro this at head. I set up a Service with an LB and an externalIP of the VM's intra-GCE IP and it works. Maybe you can send me your YAML? You can email me if you don't want to post it here.
- nambrot over 7 years: It looks like this answer no longer applies? Neither setting the node's internal IP as the loadBalancerIP nor as the externalIP works for me.
- lowercase00 over 3 years: This only works if you're not going to scale your pods (they might end up on a node with a different external IP). I would think the same concern applies when a pod dies and is recreated, since it could come back up somewhere else.
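A consolidated sketch of the kube-proxy logging steps Robert Bailey describes in the comments above, assuming the /etc/default/kube-proxy layout mentioned there:

# Raise kube-proxy verbosity from --v=2 to --v=4 (per the comment above)
sudo sed -i 's/--v=2/--v=4/' /etc/default/kube-proxy
sudo service kube-proxy restart
# Follow the more detailed log output
tail -f /var/log/kube-proxy.log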