kubernetes unhealthy ingress backend


Solution 1

You need to add a readinessProbe (just copy your livenessProbe).

It's explained in the GCE L7 Ingress Docs.

Health checks

Currently, all service backends must satisfy either of the following requirements to pass the HTTP health checks sent to it from the GCE loadbalancer:

1. Respond with a 200 on '/'. The content does not matter.
2. Expose an arbitrary url as a readiness probe on the pods backing the Service.

Also make sure that the readinessProbe is pointing to the same port that you expose to the Ingress. In your case that's fine since you have only one port; if you add another one, you may run into trouble.
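
For reference, a minimal sketch of what that could look like in the Deployment's pod spec. The /healthz path and port 4001 are placeholders for illustration; the probe must target an endpoint that returns 200 on the same port the Service's targetPort (and therefore the Ingress backend) points to.

    # Sketch only: a readinessProbe the GCE health check can pick up.
    # Path and port are placeholders and must match your container.
    readinessProbe:
      httpGet:
        path: /healthz   # any URL that answers 200 without redirecting
        port: 4001       # the same port the Service targets / the Ingress uses
      initialDelaySeconds: 5
      periodSeconds: 10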

Solution 2

It's worth noting quite an important limitation mentioned in the documentation:

Changes to a Pod's readinessProbe do not affect the Ingress after it is created.

After adding my readinessProbe, I deleted my Ingress (kubectl delete ingress <name>) and then applied my YAML file again to re-create it; shortly afterwards everything was working again.
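
A minimal sketch of that workaround, assuming the Ingress is named foo and is defined in a local file called ingress.yaml (both names are placeholders):

    # Re-create the Ingress so the GCE controller picks up the new readinessProbe.
    kubectl delete ingress foo
    kubectl apply -f ingress.yaml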

Solution 3

I was having the same issue. I followed Tex's tip but continued to see that message. It turned out I had to wait a few minutes for the Ingress to validate the service health. If someone is going through the same thing and has already done all the steps like the readinessProbe and livenessProbe, just ensure your Ingress is pointing to a Service of type NodePort, and wait a few minutes until the yellow warning icon turns into a green one. Also, check the logs in Stackdriver to get a better idea of what's going on.
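
One way to see when the health check has gone green is to keep polling the backends annotation that the GCE ingress controller writes onto the Ingress; a sketch, assuming the Ingress is named foo:

    # Poll the backend health reported by the GCE ingress controller ("foo" is a
    # placeholder for the real Ingress name) until it flips from UNHEALTHY/UNKNOWN
    # to HEALTHY.
    kubectl describe ingress foo | grep -i backends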

Solution 4

I was also having exactly the same issue after updating the readinessProbe behind my Ingress.

I could see the Ingress status labeled Some backend services are in UNKNOWN state in yellow. I waited for more than 30 minutes, yet the changes were not reflected.

After more than 24 hours the changes were reflected and the status turned green. I couldn't find any official documentation for this, but it seems like a bug in the GCP Ingress resource.
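
If you would rather not just wait, you can also ask GCE directly about the backend health. A sketch, using the backend name that shows up in the Ingress's backends annotation (the k8s-be-... value quoted in the question below):

    # Sketch: inspect the GCE backend service health directly.
    # Substitute the backend name reported by your own Ingress.
    gcloud compute backend-services list
    gcloud compute backend-services get-health k8s-be-32180--5117658971cfc555 --global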


Comments

  • Will Pink over 2 years

    I followed the load balancer tutorial: https://cloud.google.com/container-engine/docs/tutorials/http-balancer which works fine when I use the Nginx image, but when I try to use my own application image the backend switches to unhealthy.

    My application redirects on / (returns a 302), but I added a livenessProbe in the pod definition:

        livenessProbe:
          httpGet:
            path: /ping
            port: 4001
            httpHeaders:
              - name: X-health-check
                value: kubernetes-healthcheck
              - name: X-Forwarded-Proto
                value: https
              - name: Host
                value: foo.bar.com
    

    My ingress looks like:

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: foo
    spec:
      backend:
        serviceName: foo
        servicePort: 80
      rules:
      - host: foo.bar.com
    

    Service configuration is:

    kind: Service
    apiVersion: v1
    metadata:
      name: foo
    spec:
      type: NodePort
      selector:
        app: foo
      ports:
        - port: 80 
          targetPort: 4001
    

    The backend health in kubectl describe ing looks like:

    backends:       {"k8s-be-32180--5117658971cfc555":"UNHEALTHY"}
    

    and the rules on the ingress look like:

    Rules:
      Host  Path    Backends
      ----  ----    --------
      * *   foo:80 (10.0.0.7:4001,10.0.1.6:4001)
    

    Any pointers gratefully received; I've been trying to work this out for hours with no luck.

    Update

    I have added the readinessProbe to my deployment, but something still appears to hit / and the ingress is still unhealthy. My probe looks like:

        readinessProbe:
          httpGet:
            path: /ping
            port: 4001
            httpHeaders:
              - name: X-health-check
                value: kubernetes-healthcheck
              - name: X-Forwarded-Proto
                value: https
              - name: Host
                value: foo.com
    

    I changed my service to:

    kind: Service
    apiVersion: v1
    metadata:
      name: foo
    spec:
      type: NodePort
      selector:
        app: foo
      ports:
        - port: 4001
          targetPort: 4001
    

    Update2

    After I removed the custom headers from the readinessProbe it started working! Many thanks.
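
    For completeness, a sketch of what the working probes presumably look like with the custom headers dropped; the path and port come from the snippets above, everything else is an assumption.

        # Sketch only: probes without custom headers. The GCE health check
        # sends its own plain HTTP GET, so the probe endpoint must return
        # 200 without relying on extra headers or redirecting.
        readinessProbe:
          httpGet:
            path: /ping
            port: 4001
        livenessProbe:
          httpGet:
            path: /ping
            port: 4001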