Linux bond mode 4 (802.3ad) - 2 switch - 4 NIC


Solution 1

You can actually configure an LACP bond to two separate switches.

Say you have the following:

+------+     +-----+
| eth0 =-----= sw1 |
| eth1 =-----=     |
|      |     +-----+
|      |     +-----+
| eth2 =-----= sw2 |
| eth3 =-----=     |
+------+     +-----+

With all ethX interfaces in bond0, and each switch with a separate active LACP port-channel.

The bond will work fine and will recognize two different Aggregator IDs; however, only one Aggregator can be active at a time, so only one switch will be used at any given moment.

So the bond comes up with two Aggregators, one to sw1 and one to sw2. The first Aggregator is active by default, so all traffic will flow between eth0/eth1 and sw1, while eth2/eth3 and sw2 remain on idle standby.
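You can check which Aggregator is currently active through the kernel's procfs interface (a quick check, assuming the bond is named bond0):

# "Active Aggregator Info" shows the active Aggregator ID;
# each slave section shows which Aggregator ID that slave belongs to
cat /proc/net/bonding/bond0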

Say sw1's port 1 fails, so the Aggregator to sw1 has only one active port. sw1 will still be the active Aggregator. However, you can make the bond fail over to sw2 with the ad_select=bandwidth (whichever Aggregator has the most bandwidth) or ad_select=count (whichever Aggregator has the most slaves) bonding module parameter.

If sw1 fails altogether, that Aggregator goes down and sw2 takes over.
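On distributions that configure the bonding module through modprobe, the selection policy can be set as a module option. A minimal sketch (the file name bonding.conf is an arbitrary choice):

# /etc/modprobe.d/bonding.conf
# mode=4 is 802.3ad; ad_select=bandwidth fails over to the
# Aggregator with the most available bandwidth
options bonding mode=4 miimon=100 ad_select=bandwidth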

Solution 2

I just finished configuring exactly the same setup on Ubuntu Server 14.04 LTS.
The procedure should be identical for any Linux distro that configures networking through the interfaces file (e.g. Debian and most of its derivatives, like Ubuntu and Mint).

On each switch:
Configure both ports in an 802.3ad EtherChannel. There is no need for a channel definition linking both switches; the channels should be defined on each switch individually.
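On Cisco IOS switches (the question mentions Cisco) this looks roughly as follows; the port range and channel-group number are only examples:

! Repeat on sw1 and sw2 individually; no configuration links the two switches
interface range GigabitEthernet0/1 - 2
 channel-group 1 mode active
! "mode active" makes the ports negotiate LACP (802.3ad)
interface Port-channel1
 switchport mode access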

On the server:
First install package "ifenslave-2.6" through your package manager.
Then edit /etc/modules and add an extra line with the word "bonding" to it.
E.g.:

# /etc/modules: kernel modules to load at boot time
loop
lp
rtc
bonding

Run "modprobe bonding" once to load the bonding module right now.
Then edit /etc/network/interfaces to define the real NICs as manual interfaces that are slaves of the new interface "bond0".
E.g.:

# The loopback interface
auto lo
iface lo inet loopback

# The individual interfaces
auto eth0
iface eth0 inet manual
bond-master bond0

auto eth1
iface eth1 inet manual
bond-master bond0

auto eth2
iface eth2 inet manual
bond-master bond0

auto eth3
iface eth3 inet manual
bond-master bond0

# The bond interface
auto bond0
iface bond0 inet static
address 192.168.1.200
gateway 192.168.1.1
netmask 255.255.255.0
bond-mode 4
bond-miimon 100
bond-slaves eth0 eth1 eth2 eth3
bond-ad_select bandwidth

The last statement ensures that whichever of the two pairs still has full connectivity gets all the traffic when just one interface goes down.
So if eth0 and eth1 connect to switch A and eth2 and eth3 go to switch B, the connection will use switch B if either eth0 or eth1 goes down.
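Once the bond is up, you can confirm that the selection policy took effect through sysfs (a quick check, assuming the bond is named bond0):

# Should report the "bandwidth" policy
cat /sys/class/net/bond0/bonding/ad_select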

Last but not least:

ifup eth0 & ifup eth1 & ifup eth2 & ifup eth3 & ifup bond0

That's it. It works and will automatically come back online after a reboot.
You can observe failover behavior by bringing down individual ethX interfaces with ifdown and observing the resulting aggregated bandwidth through "ethtool bond0".
(No need to go to the server-room and yank cables.)
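A minimal test sequence along those lines (interface names as in the example above; with four 1 Gb links up, ethtool typically reports 4000Mb/s):

ethtool bond0 | grep Speed       # aggregate speed with all links up
ifdown eth0                      # simulate losing one link to switch A
ethtool bond0 | grep Speed       # speed now reflects the surviving links
cat /proc/net/bonding/bond0      # shows which Aggregator is active now
ifup eth0                        # restore the link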


Comments

  • rnooooo
    rnooooo almost 2 years

I know that you can use bonding mode 4 with 1 server with 2 NICs using 2 switches.

Bond 0 made of: NIC 1 port 1 -> switch A, NIC 2 port 1 -> switch B

In this case I can lose a switch or a NIC or a cable and still have my network working; if everything is working I will have link aggregation on top of high availability.

My question is: can you do the same but with 4 NICs to have more speed and still play it safe?

Bond 0 made of: NIC 1 port 1 -> switch A, NIC 1 port 2 -> switch B, NIC 2 port 1 -> switch A, NIC 2 port 2 -> switch B

The switches will probably be Cisco.

    Cheers

    • Chopper3
      Chopper3 about 11 years
      You'll need VSS-capable switches from Cisco to do this.
  • rnooooo
    rnooooo about 12 years
2 issues with your setup: 2 bonds, and if switch 1 goes down, I will lose bond0. I will try to find out if Cisco supports sharing 802.3ad across switches (they must do), otherwise I will aim for mode 6. thx
  • GioMac
    GioMac almost 11 years
Note, 802.3ad will provide higher (total) speed for multiple connections. A single connection will still go through one interface only and have single-link bandwidth.
  • suprjami
    suprjami almost 10 years
An LACP bond does not need to be connected to just one switch, even if the two switches do not share LACP information through stacking or vPC. This is one of the great strengths of LACP. See my answer for more detail.
  • Aaron R.
    Aaron R. over 9 years
    Great answer! I would have loved an /etc/modprobe.conf example though. I'll post one if I get it working.
  • suprjami
    suprjami over 9 years
"options bonding miimon=100 mode=4 ad_select=bandwidth", though I'm primarily a RHEL/CentOS guy, where the correct place to configure bonding is /etc/sysconfig/network-scripts/ifcfg-bondX using BONDING_OPTS="miimon=100 mode=4 ad_select=bandwidth"
  • Tonny
    Tonny over 9 years
@jamieb I'm looking at the same setup on Ubuntu Server 14.04 LTS. Any idea how exactly to configure this with Upstart?
  • Tonny
    Tonny over 9 years
    @jamieb Got it working. It's very easy actually. Will post full answer with working config just in case someone else stumbles across this.
  • gxx
    gxx over 4 years
Thanks for this answer -- I plan to give this a shot. Might become production, soon. :) Just one question: do I get this right that, in this scenario, inter-switch links are not necessary?