Bonding 2 or more Gigabit NICs together to get 2Gbps performance between 1 server and 1 client?

Solution 1

I've set up a lab with 2 servers, each with 2 Gbit NICs connected back-to-back by 2 CAT5e cables. Using Debian 5.0.5, freshly installed on both servers, I configured a bonding master interface bond0 with eth0 and eth1 on both machines, using bond-mode 0 (balance-rr), since there's really no need for anything more complex than this.

The configs (/etc/network/interfaces) look somewhat like this:

iface bond0 inet static
    address 192.168.1.1
    netmask 255.255.255.0
    slaves eth0 eth1
    bond_mode balance-rr
    bond_miimon 100
    bond_downdelay 200
    bond_updelay 200
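
The config on the second machine is the mirror image; a minimal sketch, assuming it gets the .2 address on the same subnet:

iface bond0 inet static
    address 192.168.1.2
    netmask 255.255.255.0
    slaves eth0 eth1
    bond_mode balance-rr
    bond_miimon 100
    bond_downdelay 200
    bond_updelay 200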

I installed Apache on one of the servers and downloaded a file from that Apache on the other machine. I was not able to achieve any speed above 1Gbit/s, but my guess is that this was due to I/O bottlenecks. I can, however, see traffic flowing on both physical interfaces, so I'd say what you want is possible.
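
To take the disks out of the equation, a memory-to-memory test with something like iperf should show whether the bond itself can go past 1Gbit/s; a rough sketch, assuming the addresses above (the flags just run multiple parallel TCP streams to spread the load):

    # on the machine with 192.168.1.1
    iperf -s

    # on the other machine: 4 parallel TCP streams for 30 seconds
    iperf -c 192.168.1.1 -P 4 -t 30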

Let me know how it turns out then :)

Hope this helps!

Solution 2

This can be done with most NICs, but you also need a switch that supports it. Most managed switches can do this just fine, but unmanaged switches won't be able to do it very well.

Make sure your servers can handle the bandwidth before spending money; a single cheap hard drive won't be able to handle 2Gbps for the most part. A nice big fat disk array is a different matter, though.
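
Before spending on the extra ports it may be worth a rough disk benchmark; 2Gbps works out to roughly 250MB/s of sustained throughput. A sketch (the device and file paths are just examples):

    # sequential read speed of the disk/array
    hdparm -t /dev/sda

    # sequential write test that bypasses the page cache
    dd if=/dev/zero of=/data/ddtest bs=1M count=2048 oflag=direct
    rm /data/ddtest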

Solution 3

It's certainly possible to do this with a switch; I'm not sure about doing it directly between computers because I've never tried.

As for whether or not it is worth it, that will depend on the quality of the NICs used, the speed of the internal bus they are plugged into and, as noted in Luma's reply, the speed of the disks being used. It really is a case of try it and see, I'm afraid.


Comments

  • Pharaun
    Pharaun almost 2 years

    I have not gotten the server or the NIC yet, but here is the target setup:

    1. 1x Server
    2. 1x Client
    3. 1 or more NICs linked point-to-point between the Server & Client (no switch involved)

    So I am wondering if it is possible to set up some form of bonding with a 2-port or 4-port Intel Pro Ethernet card on the PCI-X/PCI-E bus in a way that would enable the Client & Server to share files faster than the 1Gbps cap?

    I am aware that there will be some overhead from TCP/UDP and the other layers involved, but I want to attempt to provide the client & server with the highest possible aggregate bandwidth between the two of them.

    If this is not possible, then I will refrain from incurring the additional expense of a 2-port or 4-port Ethernet NIC.

    • Pharaun
      Pharaun almost 14 years
      I would like to add an additional question: why would a switch be required? Can't you do it point-to-point by plugging the cable from each Ethernet port into the other?
    • Khai
      Khai almost 14 years
      I'd have to test with back-to-back connected servers but I'd say it IS possible by bonding the interfaces on both sides. However you might need crossover cables if your NIC doesn't auto-crossover.
    • Pharaun
      Pharaun almost 14 years
      I want to keep this simple and not have to worry about a switch etc., just do it point to point and be done with it.
    • Luma
      Luma almost 14 years
      To use proper bonding protocols (LACP, for example) a switch is required. The switch makes the 2+ cables seem like 1 big fat cable. There are different protocols for this. I have done this with 3com and Dell managed switches. You would set the bond using the network card software on both boxes (one IP, multiple network cards), then set the switch protocol (the NIC software would use this same protocol) and voila, done. It is not that difficult; it sounds worse than it is.
    • Pharaun
      Pharaun almost 14 years
      @Luma is there any way to do it without a switch? A fancy switch that can handle that is probably expensive.
    • Khai
      Khai almost 14 years
      @Luma afaik to use 802.3ad you do NOT require a switch. But you can prove me wrong with a link to some reference :) I don't see any other reason to need some more complex protocol other than wanting the two cables to come from different switches.
    • Khai
      Khai almost 14 years
      @Pharaun if I have the time I'll setup a lab tomorrow to test this.
    • Pharaun
      Pharaun almost 14 years
      @Khai, the 2+ cables in a point-to-point link between two machines, right? That would be great if you could test it!
  • Rob Moir
    Rob Moir almost 14 years
    +1 for the disks - forgot to mention that originally in my reply.
  • Pharaun
    Pharaun almost 14 years
    I have a 6x disk RAID array on the server (the client will get RAIDed disks also), and I've benched it at about 158MB/sec read at the worst and up to 367MB/sec, so that would be ~1.2Gbps to 2.9Gbps. So it looks like 2-3x Ethernet should be adequate, not accounting for the overhead.
  • Pharaun
    Pharaun almost 14 years
    I am looking into getting Intel Pro Gigabit Ethernet, PCI-E on the server, and probably PCI-E on the client also; I could go PCI-X on the client, depending on the motherboard.
  • Rob Moir
    Rob Moir almost 14 years
    PCI-E would be better where it's available, in my experience, provided it's a wide-lane slot and card. The big question is whether or not you require a switch. I'm inclined to think that you do, but I don't know for sure.
  • Antoine Benkemoun
    Antoine Benkemoun almost 14 years
    +1 for real testing :)
  • Pharaun
    Pharaun almost 14 years
    It looks like I will go ahead and go for a 2-port NIC card for now to save a little bit of money in case it does not work out in the end, but this looks promising :) I was able to dig up some more information: linuxfoundation.org/collaborate/workgroups/networking/bonding It seems like balance-rr would be the best option. It looks like you've proven that it is workable, so it should come down to just tweaking the kernel/etc to get it to work smoothly (a quick way to verify the bond is sketched below). Thanks!
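
A minimal way to verify that round-robin bonding is actually active and that both links carry traffic, assuming the bond0/eth0/eth1 names from Solution 1:

    # should report "Bonding Mode: load balancing (round-robin)" and list both slaves
    cat /proc/net/bonding/bond0

    # per-interface counters; both slaves should show TX/RX increasing during a transfer
    ip -s link show eth0
    ip -s link show eth1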