Unable to access internet on pod in private GKE cluster
Nodes in a private GKE cluster do not have external IP addresses, so they cannot communicate with sites outside of Google. https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#pulling_a_container_image_from_a_registry
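One quick way to confirm this is to check the nodes' external IPs. A minimal sketch (node and cluster names will differ; the gcloud projection shown is just one way to print the NAT IP field):

# In a private cluster, the EXTERNAL-IP column shows <none> for every node
kubectl get nodes -o wide

# Same check from the Compute Engine side; natIP comes back empty
gcloud compute instances list --format="table(name, networkInterfaces[0].accessConfigs[0].natIP)"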
-
Jenny over 1 year
I'm currently unable to access/ping/connect to any service outside of Google from my private Kubernetes cluster. The pods are running Alpine Linux.
Routing Tables
/sleepez/api # ip route show table all
default via 10.52.1.1 dev eth0
10.52.1.0/24 dev eth0 scope link  src 10.52.1.4
broadcast 10.52.1.0 dev eth0 table local scope link  src 10.52.1.4
local 10.52.1.4 dev eth0 table local scope host  src 10.52.1.4
broadcast 10.52.1.255 dev eth0 table local scope link  src 10.52.1.4
broadcast 127.0.0.0 dev lo table local scope link  src 127.0.0.1
local 127.0.0.0/8 dev lo table local scope host  src 127.0.0.1
local 127.0.0.1 dev lo table local scope host  src 127.0.0.1
broadcast 127.255.255.255 dev lo table local scope link  src 127.0.0.1
local ::1 dev lo metric 0
local fe80::ac29:afff:fea1:9357 dev lo metric 0
fe80::/64 dev eth0 metric 256
ff00::/8 dev eth0 metric 256
unreachable default dev lo metric -1 error -101
The pod certainly has an assigned IP and has no problem connecting to its gateway:
PS C:\...\> kubectl get pods -o wide -n si-dev
NAME                              READY     STATUS    RESTARTS   AGE       IP          NODE
sleep-intel-api-79bf57bd9-c4l8d   1/1       Running   0          52m       10.52.1.4   gke-sez-production-default-pool-74b75ebc-6787
ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
3: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1460 qdisc noqueue state UP
    link/ether 0a:58:0a:34:01:04 brd ff:ff:ff:ff:ff:ff
    inet 10.52.1.4/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::ac29:afff:fea1:9357/64 scope link
       valid_lft forever preferred_lft forever
Pinging Gateway Works
/sleepez/api # ping 10.52.1.1
PING 10.52.1.1 (10.52.1.1): 56 data bytes
64 bytes from 10.52.1.1: seq=0 ttl=64 time=0.111 ms
64 bytes from 10.52.1.1: seq=1 ttl=64 time=0.148 ms
64 bytes from 10.52.1.1: seq=2 ttl=64 time=0.137 ms
^C
--- 10.52.1.1 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.111/0.132/0.148 ms
Pinging 1.1.1.1 Fails
/sleepez/api # ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1): 56 data bytes
^C
--- 1.1.1.1 ping statistics ---
6 packets transmitted, 0 packets received, 100% packet loss
System Services Status
PS C:\...\> kubectl get deploy -n kube-system
NAME                    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
event-exporter-v0.1.7   1         1         1            1           18m
heapster-v1.4.3         1         1         1            1           18m
kube-dns                2         2         2            2           18m
kube-dns-autoscaler     1         1         1            1           18m
l7-default-backend      1         1         1            1           18m
tiller-deploy           1         1         1            1           14m
Traceroute (Google Internal)
/sleepez/api # traceroute -In 74.125.69.105
 1  10.52.1.1  0.007 ms  0.006 ms  0.006 ms
 2  *  *  *
 3  *  *  *
 4  *  *
Traceroute (External)
traceroute to 1.1.1.1 (1.1.1.1), 30 hops max, 46 byte packets
 1  10.52.1.1  0.009 ms  0.003 ms  0.004 ms
 2  *  *  *
 3  *  *  *
[continues...]
-
kasperd about 6 years
Including a traceroute from the VM to an IP address outside of Google, as well as a traceroute from outside of Google to the external IP address of your VM, will make this problem a lot easier to debug.
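For anyone following along, the pair of checks kasperd is describing would look roughly like this; 203.0.113.10 is a placeholder documentation address standing in for the VM's external IP, which a private-cluster node may not have:

# From inside the pod/VM, toward an address outside Google
traceroute -n 1.1.1.1

# From a host outside Google, toward the VM's external IP
# (203.0.113.10 is a placeholder)
traceroute -n 203.0.113.10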
-
Jenny about 6 years
@kasperd I included the traceroutes from inside the pod to an internal Google IP. I'm not sure how the external traceroute will help, since it terminates at the Google-managed K8s cluster...
-
kasperd about 6 years
The other traceroute I was asking for was from an external network to the external IP address of your VM.
-
Jenny about 6 years
With a private GKE cluster, the Compute nodes don't receive an external IP address. I can add one to provide the debugging info, but I still doubt that is the issue.
-
Jenny about 6 years
This seems to be the correct answer, and I wish it were highlighted a bit more in their documentation.
-
Tarek about 5 years
You should use a NAT gateway such as Cloud NAT to route traffic from your VPC through it. Cloud NAT runs outside your cluster but on the same network.
-
k_vishwanath over 4 years
@Tarek could you please provide the steps to enable NAT on a private GKE cluster?
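For anyone landing here with the same question: enabling Cloud NAT for an existing private cluster is generally two gcloud commands. A minimal sketch, assuming the cluster's VPC is named my-vpc and lives in us-central1 (the network, region, and the nat-router/nat-config names are all placeholders):

# Create a Cloud Router in the cluster's VPC and region
gcloud compute routers create nat-router \
    --network=my-vpc \
    --region=us-central1

# Attach a Cloud NAT config so instances without external IPs get egress
gcloud compute routers nats create nat-config \
    --router=nat-router \
    --region=us-central1 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges

No change to the cluster itself should be needed; new egress flows from the pods typically start working shortly after the NAT config is created.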
-
Jenny about 4 years
Looking back at this question, I feel dumb, but I hope it helps others.