Nginx ingress controller: forward source IP


Solution 1

If you've installed nginx-ingress with the Helm chart, you can simply configure your values.yaml file with controller.service.externalTrafficPolicy: Local, which I believe will apply to all of your Services. Otherwise, you can configure specific Services with service.spec.externalTrafficPolicy: Local to achieve the same effect on those specific Services.
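For example, a minimal values.yaml sketch (assuming the standard nginx-ingress Helm chart layout; your chart version may nest this key differently):

    # values.yaml -- sketch for the nginx-ingress Helm chart
    controller:
      service:
        # Preserve the client source IP: route external traffic only to
        # nodes that run a controller pod, skipping the extra kube-proxy hop.
        externalTrafficPolicy: Local

Note that with Local, the load balancer only sends traffic to nodes that actually run a controller pod; its health checks take the other nodes out of rotation.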

Solution 2

It sounds like you have your Nginx Ingress Controller behind a NodePort (or LoadBalancer) Service, i.e. behind kube-proxy. Generally, to get your controller to see the raw connecting IP, you will need to deploy it with hostNetwork so it listens directly for incoming traffic on the node.
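A rough sketch of what that looks like in the controller's Deployment (or DaemonSet) pod spec; the container name here is illustrative:

    # Pod spec fragment (sketch) -- bind the controller to the node's network
    spec:
      template:
        spec:
          hostNetwork: true
          # Keep cluster-internal DNS resolution working on the host network:
          dnsPolicy: ClusterFirstWithHostNet
          containers:
          - name: nginx-ingress-controller
            ports:
            - containerPort: 80
            - containerPort: 443

With hostNetwork: true the controller binds ports 80/443 directly on each node it runs on, so kube-proxy never rewrites the source address; the trade-off is that only one pod per node can claim those ports.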


Comments

  • bramvdk almost 2 years

    I have set up an ingress for an application but want to whitelist my IP address, so I created this Ingress:

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt
        kubernetes.io/ingress.class: nginx
        nginx.ingress.kubernetes.io/whitelist-source-range: ${MY_IP}/32
      name: ${INGRESS_NAME}
    spec:
      rules:
      - host: ${DNS_NAME}
        http:
          paths:
          - backend:
              serviceName: ${SVC_NAME}
              servicePort: ${SVC_PORT}
      tls:
      - hosts:
        - ${DNS_NAME}
        secretName: tls-secret
    

    But when I try to access it I get a 403 Forbidden, and in the nginx logs I see a client IP, but it is the IP of one of the cluster nodes and not my home IP.

    I also created a configmap with this configuration:

    data:
      use-forwarded-headers: "true"
    

    In the nginx.conf in the container I can see that this has been passed on and configured correctly, but I still get a 403 Forbidden, and the client IP is still that of a cluster node.

    I am running on an AKS cluster and the nginx ingress controller is behind an Azure load balancer. The nginx ingress controller Service is exposed as type LoadBalancer, and the load balancer forwards to the NodePort opened by the Service.

    Do I need to configure something else within Nginx?

  • bramvdk about 4 years
    Hi, yeah, sorry, I forgot to mention that; I've edited the question. My nginx controller is exposed as type LoadBalancer and is indeed behind an Azure load balancer, which has LB rules forwarding to the NodePorts opened by the Service.
  • bramvdk about 4 years
    Sorry, I tried to edit coderanger's suggestion but wanted to edit my own. I want to add that kube-proxy is used by default in AKS.
  • Pramod Setlur about 3 years
    I feel the latter might not work. If we only enable it on the application's Service and not on the nginx-ingress Service, the node's IP would still get forwarded to the Service instead of the real client IP.
  • tuxErrante about 3 years
    Tried both use-forwarded-headers in the ConfigMap and externalTrafficPolicy on the nginx ingress (quay ingress-controller 0.30) on Oracle Cloud; neither worked for me.
  • Phil about 3 years
    Doing this on specific Services didn't work for me. Setting externalTrafficPolicy: Local on the nginx-ingress-controller SERVICE (not the deployment or config) made everything magically work. Even ClusterIP services now get the correct headers. (See the sketch below.)
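Following up on Phil's comment, here is a sketch of that Service; the name, namespace, and selector labels are assumptions, so match them to your install:

    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-ingress-controller    # assumption: your controller Service name
      namespace: ingress-nginx          # assumption: your controller namespace
    spec:
      type: LoadBalancer
      # Preserve the original client IP by skipping the extra kube-proxy hop:
      externalTrafficPolicy: Local
      selector:
        app: nginx-ingress-controller   # assumption: your controller pod labels
      ports:
      - name: http
        port: 80
        targetPort: 80
      - name: https
        port: 443
        targetPort: 443

Equivalently, an existing Service can be patched in place with kubectl patch svc <controller-svc> -n <namespace> -p '{"spec":{"externalTrafficPolicy":"Local"}}'.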