802.3ad bonding configuration file on an Ubuntu 16.04 LTS Server


Solution 1

I have a working setup running on 16.04 (linux 4.4.0-22) that is very similar.

Apart from the LACP rate and 1G (eno1+) vs 10G SFP+ (eno49+), the biggest difference seems to be the use of auto bond0.

# /etc/modprobe.d/bonding.conf
alias bond0 bonding
    options bonding mode=4 miimon=100 lacp_rate=1

Some of these options may be redundant.
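
To see which values the bond actually ends up using (whether they came from the modprobe options or from the interfaces file below), you can read the runtime settings from sysfs once bond0 is up. This is just a read-only sanity check, not part of the configuration:

$ cat /sys/class/net/bond0/bonding/mode
$ cat /sys/class/net/bond0/bonding/miimon
$ cat /sys/class/net/bond0/bonding/lacp_rate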

# /etc/network/interfaces
auto eno49
iface eno49 inet manual
    bond-master bond0

auto eno50
iface eno50 inet manual
    bond-master bond0

auto bond0
iface bond0 inet static
    address 10.0.0.1
    netmask 255.255.255.0
    bond-slaves eno49 eno50
    bond-mode 4
    bond-miimon 100
    bond-lacp-rate 1

I'm not seeing any stalls during boot. Running systemctl restart networking yields a short wait of a few seconds, but nothing more.

$ systemd-analyze
Startup finished in 2.344s (kernel) + 1.658s (userspace) = 4.002s
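
To double-check that LACP actually negotiated with the switch, read the kernel's view of the bond from procfs; you should see the 802.3ad mode reported and an MII status of "up" for the bond and for each slave (the exact output varies between kernel versions):

$ cat /proc/net/bonding/bond0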

Solution 2

You must allow the system to bring up the bond interface even when the slave ports are not yet ready to be configured; "bond-slaves none" does that. So a correct configuration example is:

allow-hotplug eno1
iface eno1 inet manual
    bond-master bond0

allow-hotplug eno2
iface eno2 inet manual
    bond-master bond0

auto bond0
iface bond0 inet manual
    bond-mode 802.3ad
    bond-miimon 100
    bond-lacp-rate fast
    bond-slaves none
    bond-xmit_hash_policy layer2+3
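
With bond-slaves none, the slaves attach themselves through their own bond-master stanzas as they come up, so a quick way to confirm that both actually joined is to read the bond's slave list from sysfs; it should print something like "eno1 eno2" (a sanity check only, not part of the configuration):

$ cat /sys/class/net/bond0/bonding/slaves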

Solution 3

I too have a working bonding setup on 16.04, and it has run unchanged on Ubuntu since 12.04.

My solution is pretty much the same as the one from @timss, but I never needed to mess with /etc/modprobe.d/bonding.conf, and there are a few details I found necessary over time, which I included below and comment on at the end.

Below, I have interfaces eth2-eth5 bonded into bond0:

auto eth2
iface eth2 inet manual
        bond-master bond0

auto eth3
iface eth3 inet manual
        bond-master bond0

auto eth4
iface eth4 inet manual
        bond-master bond0

auto eth5
iface eth5 inet manual
        bond-master bond0

auto bond0
iface bond0 inet manual
        hwaddress ether 00:00:00:00:00:00 <= ADD MAC of one of the bonded interfaces here
        bond-slaves eth2 eth3 eth4 eth5
        bond-miimon 100
        bond-mode 802.3ad
        bond-lacp-rate 1
        xmit_hash_policy layer3+4

Comments:

  1. "hwaddress ether ":I noticed that when you bond your interfaces, the MAC address of the bonded interface will be equal to the MAC address of one of the interfaces being bonded, but it may change everytime the system is restarted. I find it useful for servers to have a known MAC address, so here I have it set to the MAC of one of the interfaces in a way that will be permanent.
  2. "xmit_hash_policy": read the docs about this option, it can have very significant impact on the performance of your bonded interface.

Comments

  • To마SE (almost 2 years ago)

    If I use a manual setup on the command line (following the kernel instructions), I can properly setup my network connection:

    # modprobe bonding mode=4 miimon=100
    # ifconfig bond0 up
    # ip link set eno1 master bond0
    # ip link set eno2 master bond0
    

    For the record, the switch used is a Cisco Nexus 2248, and I do not specify an IP address because there's an additional 802.1q layer (whose presence or absence in the configuration file has no impact on the problem).
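
    For reference, that 802.1q layer is just a tagged sub-interface on top of bond0. A minimal sketch of what its stanza could look like (the VLAN ID 100 and the address are made up here, and it assumes the vlan package is installed):

    # made-up example: VLAN 100 carried on top of the bond
    auto bond0.100
    iface bond0.100 inet static
           vlan-raw-device bond0
           address 192.0.2.10
           netmask 255.255.255.0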

    The problem is that I'm unable to create a correct /etc/network/interfaces file to have this done automatically at boot time. There is a lot of confusion online about the different versions of the ifenslave package (notably its documentation) and about how to avoid race conditions when using ifup. Whatever worked with previous versions of Ubuntu no longer does. And I wouldn't be surprised if systemd were making things even messier. Basically, whatever I try, my scripts get stuck at boot time and I have to wait either one or five minutes before the boot process completes.

    This is the best that I could achieve:

    auto lo
    iface lo inet loopback
    
    allow-bond0 eno1
    iface eno1 inet manual
           bond-master bond0
    
    allow-bond0 eno2
    iface eno2 inet manual
           bond-master bond0
    
    auto bond0
    iface bond0 inet manual
           bond-mode 4
           bond-slaves eno1 eno2
           bond-miimon 100
    

    At boot time, bringing up bond0 stalls for one minute (because bond0 is waiting for at least one of its slaves to be brought up, which never happens, so it times out), but then once the system is booted, using ifup eno1 works and bond0 starts working properly.

    If I specify auto eno1, then the boot process stalls for five minutes, bond0 is never brought up properly, and trying to use ifdown eno1 gets stuck because it's waiting for some lock in /run/network/wherever (I can't remember the exact file, and I've rebooted this machine often enough already), which seems to indicate that yes, I ran into a race condition and ifup is stuck forever on eno1.

    Does anyone have a working solution on the latest Ubuntu?

  • To마SE (about 8 years ago)
    I just did a fresh install on another server and it works; when I have spare time I'll try to figure out exactly what is wrong with the first machine. Besides, I didn't need to set up /etc/modprobe.d/bonding.conf; I think this was only necessary back when the ifup script didn't understand the bond- parameters.
  • timss (about 8 years ago)
    @To마SE Ok, that's good to know.