Configuring flannel to use a non-default interface in Kubernetes


Solution 1

I had the same problem when trying to use Kubernetes with Vagrant. I found this note in the flannel documentation:

Vagrant typically assigns two interfaces to all VMs. The first, for which all hosts are assigned the IP address 10.0.2.15, is for external traffic that gets NATed.

This may lead to problems with flannel. By default, flannel selects the first interface on a host. This leads to all hosts thinking they have the same public IP address. To prevent this issue, pass the --iface eth1 flag to flannel so that the second interface is chosen.

So I looked for it in flannel's pod configuration. If you download the kube-flannel.yml file, look at the DaemonSet spec, specifically at the "kube-flannel" container. There, add the required "--iface=enp0s8" argument (don't forget the "="). Here is part of the configuration I used:

  containers:
  - name: kube-flannel
    image: quay.io/coreos/flannel:v0.10.0-amd64
    command:
    - /opt/bin/flanneld
    args:
    - --ip-masq
    - --kube-subnet-mgr
    - --iface=enp0s8

Then run kubectl apply -f kube-flannel.yml
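The manual edit can also be scripted. Here is a minimal sketch that demonstrates the sed expression on a stand-in snippet of the manifest (the file name args-snippet.yml is just for illustration, and enp0s8 is the host-only interface on this particular Vagrant box):

```shell
# Stand-in for the relevant lines of kube-flannel.yml (the real file is
# much longer); in practice, run the sed command against the downloaded
# manifest itself.
cat > args-snippet.yml <<'EOF'
        args:
        - --ip-masq
        - --kube-subnet-mgr
EOF
# Insert --iface=enp0s8 right after --kube-subnet-mgr, keeping the
# same indentation as the surrounding list items.
sed -i 's/^\( *\)- --kube-subnet-mgr$/&\n\1- --iface=enp0s8/' args-snippet.yml
cat args-snippet.yml
```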

Hope this helps.

Solution 2

I don't know why, but directly running "kubectl apply -f kube-flannel.yml" did not work on my side; the logs still showed flannel using the interface named eth0.

After running kubectl delete -f kube-flannel.yml and then kubectl apply -f kube-flannel.yml, the logs showed the interface eth1 being used:

I1122 11:31:44.405982       1 main.go:488] Using interface with name eth1 and address 192.168.0.24
I1122 11:31:44.406153       1 main.go:505] Defaulting external address to interface address (192.168.0.24)
I1122 11:31:44.428414       1 kube.go:131] Waiting 10m0s for node controller to sync
I1122 11:31:44.428552       1 kube.go:294] Starting kube subnet manager
I1122 11:31:45.429349       1 kube.go:138] Node controller sync successful
Author: clvx

Updated on June 13, 2022

Comments

  • clvx
    clvx almost 2 years

Is there a way to define which interface flannel should listen on? According to its documentation, adding FLANNEL_OPTIONS="--iface=enp0s8" in /etc/sysconfig/flanneld should work, but it isn't working.

My master node is running in a Xenial (Ubuntu 16.04) Vagrant box:

    $ sudo kubeadm init --pod-network-cidr 10.244.0.0/16 --apiserver-advertise-address 10.0.0.10 
    
    $ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    clusterrole "flannel" created                                                                    
    clusterrolebinding "flannel" created                                                                   
    serviceaccount "flannel" created                                                                 
    configmap "kube-flannel-cfg" created                                                                                                                                                       
    daemonset "kube-flannel-ds" created   
    
    
    ubuntu@master:~$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
    clusterrole "flannel" configured                                                          
    clusterrolebinding "flannel" configured         
    

Host IP addresses:

    $ ip addr                      
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1     
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00                                    
        inet 127.0.0.1/8 scope host lo            
           valid_lft forever preferred_lft forever                                               
        inet6 ::1/128 scope host                  
           valid_lft forever preferred_lft forever                                               
    2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000                                                                                    
        link/ether 02:63:8e:2c:ef:cd brd ff:ff:ff:ff:ff:ff                                       
        inet 10.0.2.15/24 brd 10.0.2.255 scope global enp0s3                                     
           valid_lft forever preferred_lft forever                                               
        inet6 fe80::63:8eff:fe2c:efcd/64 scope link                                              
           valid_lft forever preferred_lft forever                                               
    3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000                                                                                    
        link/ether 08:00:27:fb:ad:bb brd ff:ff:ff:ff:ff:ff                                       
        inet 10.0.0.10/24 brd 10.0.0.255 scope global enp0s8                                     
           valid_lft forever preferred_lft forever                                               
    4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default                                                                                            
        link/ether 02:42:da:aa:6e:13 brd ff:ff:ff:ff:ff:ff                                       
        inet 172.17.0.1/16 scope global docker0   
           valid_lft forever preferred_lft forever                                               
    5: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default                                                                                         
        link/ether 5e:07:a1:7f:97:53 brd ff:ff:ff:ff:ff:ff                                       
        inet 10.244.0.0/32 scope global flannel.1 
           valid_lft forever preferred_lft forever                                               
        inet6 fe80::5c07:a1ff:fe7f:9753/64 scope link                                            
           valid_lft forever preferred_lft forever                                               
    6: cni0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000                                                                                     
        link/ether 0a:58:0a:f4:00:01 brd ff:ff:ff:ff:ff:ff                                       
        inet 10.244.0.1/24 scope global cni0      
           valid_lft forever preferred_lft forever                                               
        inet6 fe80::7037:fcff:fe41:b1fb/64 scope link                                            
           valid_lft forever preferred_lft forever                  
    

Pod names:

    $ kubectl get pods --all-namespaces                                                                                                                                         
    NAMESPACE     NAME                             READY     STATUS              RESTARTS   AGE  
    kube-system   etcd-master                      1/1       Running             0          2m   
    kube-system   kube-apiserver-master            1/1       Running             0          2m   
    kube-system   kube-controller-manager-master   1/1       Running             0          2m   
    kube-system   kube-dns-545bc4bfd4-gjjth        0/3       ContainerCreating   0          3m   
    kube-system   kube-flannel-ds-gdz8f            1/1       Running             0          1m   
    kube-system   kube-flannel-ds-h4fd2            1/1       Running             0          33s  
    kube-system   kube-flannel-ds-rnlsz            1/1       Running             1          33s  
    kube-system   kube-proxy-d4wv9                 1/1       Running             0          33s  
    kube-system   kube-proxy-fdkqn                 1/1       Running             0          3m   
    kube-system   kube-proxy-kj7tn                 1/1       Running             0          33s  
    kube-system   kube-scheduler-master            1/1       Running             0          2m   
    

    Flannel Logs:

    $ kubectl logs -n kube-system kube-flannel-ds-gdz8f -c kube-flannel
    I1216 12:00:35.817207       1 main.go:474] Determining IP address of default interface
    I1216 12:00:35.822082       1 main.go:487] Using interface with name enp0s3 and address 10.0.2.15
    I1216 12:00:35.822335       1 main.go:504] Defaulting external address to interface address (10.0.2.15)
    I1216 12:00:35.909906       1 kube.go:130] Waiting 10m0s for node controller to sync
    I1216 12:00:35.909950       1 kube.go:283] Starting kube subnet manager
    I1216 12:00:36.987719       1 kube.go:137] Node controller sync successful
    I1216 12:00:37.087300       1 main.go:234] Created subnet manager: Kubernetes Subnet Manager - master
    I1216 12:00:37.087433       1 main.go:237] Installing signal handlers
    I1216 12:00:37.088836       1 main.go:352] Found network config - Backend type: vxlan
    I1216 12:00:37.089018       1 vxlan.go:119] VXLAN config: VNI=1 Port=0 GBP=false DirectRouting=false
    I1216 12:00:37.295988       1 main.go:299] Wrote subnet file to /run/flannel/subnet.env
    I1216 12:00:37.296025       1 main.go:303] Running backend.
    I1216 12:00:37.296048       1 main.go:321] Waiting for all goroutines to exit
    I1216 12:00:37.296084       1 vxlan_network.go:56] watching for new subnet leases
    

How do I configure flannel in Kubernetes to listen on enp0s8 instead of enp0s3?

  • CTodea
    CTodea almost 6 years
    I was suspecting that the problem was around the iface flannel was using for the vxlan, but was trying to solve it changing the routing table. Your solution worked like a charm.
  • jonashackt
    jonashackt almost 6 years
    And also be sure to add the following to kubelet.service: --node-ip={{ hostvars[inventory_hostname]['ansible_enp0s8']['ipv4']['address'] }} , this way the correct host IPs will match the Pod CIDRs
  • Hao
    Hao over 5 years
    my iface is eth0
  • mlazarov
    mlazarov over 5 years
    Hey lalo, your solution will work only if all nodes are the same! If the interface on some of the nodes isn't enp0s8 this won't work.
  • Bathz
    Bathz about 5 years
    Thanks for your comment, @Libo-zhu, we experienced exactly the same behaviour.
  • Alex G
    Alex G almost 5 years
    --iface-regex=10\.0\.*\.* is the answer
  • tftd
    tftd over 2 years
    @AlexG thank you, sir, for this wonderful hint! This was exactly what I was looking for. In my case we were building a hybrid cluster and some nodes were in a different subnet. The iface-regex idea works brilliantly! :)
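For clusters where interface names differ across nodes (the situation mlazarov and tftd describe), flannel's --iface-regex flag can select the interface by matching its IP address instead of a fixed name. A sketch of the args section, assuming the nodes' host-only addresses all fall under 10.0.0.0/16 (adjust the pattern to your subnet):

```yaml
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        # Match the interface by IP rather than name, so nodes with
        # differing interface names (eth1, enp0s8, ...) all work.
        - --iface-regex=10\.0\..*
```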