Nginx Ingress Controller - Failed Calling Webhook


Solution 1

I am not sure if this helps this late, but could it be that your cluster is behind a proxy? In that case you have to have no_proxy configured correctly. Specifically, it has to include .svc,.cluster.local, otherwise validation webhook requests such as https://ingress-nginx-controller-admission.ingress-nginx.svc:443/extensions/v1beta1/ingresses?timeout=30s will be routed via the proxy server (note the .svc in the URL).

I had exactly this issue, and adding .svc to the no_proxy variable helped. You can try this out quickly by modifying the /etc/kubernetes/manifests/kube-apiserver.yaml file, which will in turn automatically recreate your Kubernetes API server pod.
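For illustration, in a kubeadm setup those proxy settings typically live in the env section of the API server's static pod manifest. A minimal sketch of the relevant part (the proxy address and CIDRs are placeholders; keep whatever values your environment already has and just extend no_proxy):

# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - name: kube-apiserver
    env:
    - name: HTTPS_PROXY
      value: "http://proxy.example.com:3128"  # placeholder proxy address
    - name: NO_PROXY
      # extend your existing value with .svc,.cluster.local so that
      # in-cluster webhook URLs bypass the proxy
      value: "localhost,127.0.0.1,10.96.0.0/12,.svc,.cluster.local"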

This is not the case just for ingress validation, but also for anything else in your cluster that refers to a URL ending with .svc or .namespace.svc.cluster.local (e.g., see this bug).

Solution 2

Another option you have is to remove the Validating Webhook entirely:

kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission

I found I had to do that on another issue, but the workaround/solution works here as well.

This isn't the best answer; the best answer is to figure out why this doesn't work. But at some point, you live with workarounds.

I'm installing on Docker for Mac, so I used the cloud rather than baremetal version:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.34.1/deploy/static/provider/cloud/deploy.yaml
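Either way, you can quickly confirm the webhook configuration is gone before retrying your Ingress (the manifest filename below is just a placeholder for your own file):

kubectl get validatingwebhookconfigurations
kubectl apply -f my-ingress.yaml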

Solution 3

In my case I'd mixed the installations up. I resolved the issue by executing the following steps:

$ kubectl get validatingwebhookconfigurations 

I iterated through the list of configurations returned by the above command and deleted each one using

$ kubectl delete validatingwebhookconfigurations [configuration-name]
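If several stale configurations are left over from mixed-up installations, a one-liner can remove them in bulk. This is just a sketch; the grep pattern is an assumption and should be adjusted to match the names your installation actually uses:

kubectl get validatingwebhookconfigurations -o name | grep ingress-nginx | xargs kubectl delete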

Solution 4

In my case I didn't need to delete the ValidatingWebhookConfiguration. The issue was that I was using a private cluster on GCP, version 1.17.14-gke.1600. If I understood it correctly, on a default Kubernetes installation the validatingwebhook API (which of course runs on the master node) is exposed at port 443. But GCP changed the port to 8443 for security reasons: in order to bind port 443, the service would need root access to the node. Since they didn't want that, they changed it to 8443. Now, since a private cluster only allows ports 80/443 externally for Ingress on the nodes (that is, all the nodes will only accept requests to these ports), when Kubernetes tries to validate your Ingress against validatingwebhook-address:8443 it fails; it would not fail if it ran on 443. This thread contains more detailed information.

So the current workaround, as recommended by Google itself (but very poorly documented), is to add a firewall rule on GCP that allows inbound (Ingress) TCP requests to your master node at port 8443, so that the other nodes within the cluster can reach the master's validatingwebhook API on that very port.

As to how to create the rule, this is how I did it:

  1. Went to Firewall Rules and added a new one.
  2. In the Network field I selected the VPC my cluster is in.
  3. Direction of traffic I set to Ingress.
  4. Action on match to Allow.
  5. Targets to Specified target tags.
  6. The target tags can be found in the master node's details, in a property called Network tags. To find it, I opened a new window, went to my cluster's node pools, and found the master node pool. Then I opened one of the nodes to look at the virtual machine details, where I found the Network tags. I copied the value and went back to the firewall rule form.
  7. Pasted the copied network tag into the tag field.
  8. At Protocols and ports, checked Specified protocols and ports.
  9. Then checked TCP and entered 8443.
  10. Saved the rule and applied the manifest again.

NOTE: Most threads out there will say it's port 9443. That may work, but I first attempted 8443 since it was reported to work on this thread, and it worked for me, so I didn't even try 9443.
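For reference, roughly the same rule can be created from the command line with gcloud. This is only a sketch: the rule name, network, source range, and target tag are placeholders that must match your VPC, your master's CIDR (for example the value passed to --master-ipv4-cidr, as one commenter below notes), and the network tag copied in step 6:

gcloud compute firewall-rules create allow-master-to-nodes-8443 \
    --network my-vpc \
    --direction INGRESS \
    --action ALLOW \
    --rules tcp:8443 \
    --source-ranges 172.16.0.0/28 \
    --target-tags gke-my-cluster-node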

Solution 5

I've solved this issue. The problem was that you use Kubernetes version 1.18, but the ValidatingWebhookConfiguration in the current ingress-nginx manifest uses an older API version; see the docs: https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites

Ensure that the Kubernetes cluster is at least as new as v1.16 (to use admissionregistration.k8s.io/v1), or v1.9 (to use admissionregistration.k8s.io/v1beta1).

And in the current YAML:

# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
# before changing this value, check the required kubernetes version
# https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
apiVersion: admissionregistration.k8s.io/v1beta1

and in the rules:

apiVersions:
  - v1beta1

So you need to change it to v1:

apiVersion: admissionregistration.k8s.io/v1

and add v1 to the rules:

apiVersions:
  - v1beta1
  - v1

After you change it and redeploy, your custom Ingress will deploy successfully.
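To check which admissionregistration API versions your cluster actually serves before editing the manifest (as one commenter below also points out):

kubectl api-versions | grep admissionregistration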

Author: PhotonTamer

Updated on February 13, 2022

Comments

  • PhotonTamer
    PhotonTamer over 2 years

    I set up a k8s cluster using kubeadm (v1.18) on an Ubuntu virtual machine. Now I need to add an Ingress Controller. I decided on nginx (but I'm open to other solutions). I installed it according to the docs, section "bare-metal":

    kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-0.31.1/deploy/static/provider/baremetal/deploy.yaml

    The installation seems fine to me:

    kubectl get all -n ingress-nginx

    NAME                                            READY   STATUS      RESTARTS   AGE
    pod/ingress-nginx-admission-create-b8smg        0/1     Completed   0          8m21s
    pod/ingress-nginx-admission-patch-6nbjb         0/1     Completed   1          8m21s
    pod/ingress-nginx-controller-78f6c57f64-m89n8   1/1     Running     0          8m31s
    
    NAME                                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
    service/ingress-nginx-controller             NodePort    10.107.152.204   <none>        80:32367/TCP,443:31480/TCP   8m31s
    service/ingress-nginx-controller-admission   ClusterIP   10.110.191.169   <none>        443/TCP                      8m31s
    
    NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/ingress-nginx-controller   1/1     1            1           8m31s
    
    NAME                                                  DESIRED   CURRENT   READY   AGE
    replicaset.apps/ingress-nginx-controller-78f6c57f64   1         1         1       8m31s
    
    NAME                                       COMPLETIONS   DURATION   AGE
    job.batch/ingress-nginx-admission-create   1/1           2s         8m31s
    job.batch/ingress-nginx-admission-patch    1/1           3s         8m31s
    

    However, when trying to apply a custom Ingress, I get the following error:

    Error from server (InternalError): error when creating "yaml/xxx/xxx-ingress.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/extensions/v1beta1/ingresses?timeout=30s: Temporary Redirect

    Any idea what could be wrong?

    I suspected DNS, but other NodePort services are working as expected and DNS works within the cluster.

    The only thing I can see is that I don't have a default-http-backend, which is mentioned in the docs here. However, this seems normal in my case, according to this thread.

    Last but not least, I also tried the installation with manifests (after removing the ingress-nginx namespace from the previous installation) and the installation via the Helm chart, with the same result.

    I'm pretty much a beginner on k8s and this is my playground-cluster. So I'm open to alternative solutions as well, as long as I don't need to set up the whole cluster from scratch.

    Update: With "applying custom Ingress", I mean: kubectl apply -f <myIngress.yaml>

    Content of myIngress.yaml

    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      name: my-ingress
      annotations:
        nginx.ingress.kubernetes.io/rewrite-target: /
    spec:
      rules:
      - http:
          paths:
          - path: /someroute/fittingmyneeds
            pathType: Prefix
            backend:
              serviceName: some-service
              servicePort: 5000
    
  • Fernando Correia
    Fernando Correia over 3 years
    kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission resolved it for me on minikube 1.12 with k8s 1.18.
  • Chris Halcrow
    Chris Halcrow over 3 years
    @Patrick Gardella this seems to be the practical solution for many people, as demonstrated by the many upvotes on your related post stackoverflow.com/a/62044090/1549918. I'm not even sure it's only a workaround.
  • Lucas Cimon
    Lucas Cimon over 3 years
    In order to know what version of admissionregistration.k8s.io is compatible with your setup, use kubectl api-versions | grep admissionregistration
  • Abdenour Keddar
    Abdenour Keddar about 3 years
    Thank you for your great explanation! I encountered this problem with our private prod GKE cluster. I should note that I only added port 8443 to make it work.
  • Brett
    Brett about 3 years
    This is great; it saved me after way too long trying to troubleshoot. Thanks Oleg.
  • jarvo69
    jarvo69 about 3 years
    Any solution without deleting the ValidatingWebhookConfiguration?
  • Joao M
    Joao M over 2 years
    Instead of deleting the admission webhook, a more practical solution is to allow all nodes to communicate with port 8443 in the firewall. kubernetes.github.io/ingress-nginx/deploy - "In case Network policies or additional firewalls, please allow access to port 8443."
  • Joao M
    Joao M over 2 years
    Same for me. In my case, I had to open a security group on my custom install at AWS.
  • petermicuch
    petermicuch over 2 years
    And BTW, I would not disable the validation webhook for Ingress resources. It is there for a reason and can prevent your controller from going completely down when someone applies a corrupted Ingress (not in terms of syntax, but runtime issues). Then all applications behind this ingress controller would become unavailable.
  • PhotonTamer
    PhotonTamer over 2 years
    Indeed, I am behind a proxy. Your solution seems clean and works. Thank you!
  • davidfm
    davidfm over 2 years
    That seems to have solved my problem mate. Thanks!
  • otherguy
    otherguy over 2 years
    Super helpful, thanks! I also only added port 8443 and it worked. You might want to add some info about what source to select when creating the firewall rule (I used the entire block I also specified in --master-ipv4-cidr= when creating the cluster).
  • meh
    meh about 2 years
    I really hate upvoting this. But it worked.
  • Pleymor
    Pleymor about 2 years
    Thanks! In my case, ingress-nginx-admission had another name: nginx-ingress-nginx-admission. I found it with kubectl get validatingwebhookconfigurations
  • David Wesby
    David Wesby almost 2 years
    Oleg, in "or v1.9 (to use admissionregistration.k8s.io/v1beta1)", did you mean to write v1.19? v1.24 seems to be the newest K8s version as I'm writing this.