FreeBSD or Linux as a BGP router over 100Mbps?


Solution 1

We've done exactly this for critical infrastructure for many years. We take three full upstream BGP feeds through Quagga's bgpd, and the whole system uses a whopping 658MB of RAM. For this purpose Debian has been much more solid than other OSs in our experience (its minimal install footprint also means fewer security updates, and so far fewer reboots than the two other OSs we've tried). We use Ksplice, so we only reboot for critical package updates. Don't worry at all about compatibility with other vendors at your ISP: RIPE, the RIR, uses Quagga!
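For a feel of what this looks like, here's a minimal illustrative bgpd.conf; the AS numbers, addresses and descriptions below are invented, and a real config would add prefix filters per neighbor:

```
! /etc/quagga/bgpd.conf (sketch only; values are made up)
router bgp 64512
 bgp router-id 192.0.2.1
 neighbor 198.51.100.1 remote-as 64496
 neighbor 198.51.100.1 description upstream-1
 neighbor 203.0.113.1 remote-as 64497
 neighbor 203.0.113.1 description upstream-2
 network 192.0.2.0/24
!
log file /var/log/quagga/bgpd.log
```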

Surprisingly, the hardware isn't that important; it's all about the NICs. A fast CPU basically just means the prefixes load quicker when you refresh the sessions (assuming you've got a GB of RAM for them to load into), so an entry-level quad-core is massively over-specced. We spent a long time trying different NICs, and in our experience the best are the Intel cards that use the igb driver (at about £100 per NIC we use the 82576-based ET Dual Port Server Adapter), with the e1000 coming second. There are a few considerations, like how your ingress and egress NICs talk to the mainboard, but below 250Mbps you probably won't notice if you use these NICs. We've repelled a sophisticated UDP DDoS attack on this architecture (it used the tiniest UDP packets, which routers struggle to handle). Bear in mind that what you care about most is the number of packets per second you can process, not necessarily the throughput measured in Mbps. For very little money we've specified a gigabit multihomed router that handles standard Internet-size packets, i.e. normal operation, at up to 850Mbps!
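A quick way on Linux to check which driver a card is using, and to enlarge its receive ring before a packet flood hits, looks something like this (eth0 is an assumed interface name, and the ring maximum is card-dependent):

```shell
# confirm the driver: look for "driver: igb" or "driver: e1000"
ethtool -i eth0
# show current vs. hardware-maximum ring sizes
ethtool -g eth0
# raise the RX ring towards its hardware maximum
ethtool -G eth0 rx 4096
```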

I started with Cisco (bgpd's config is near-enough identical, so if you've got experience with Cisco kit it's a really quick transition), but Linux is so malleable (e.g. you can add a few low-resource scripts to your routers to help with reporting and admin) that IMHO it's incredibly powerful, and underrated, for this type of setup. You can't go far wrong reading the NANOG mailing list archives if you're still in any doubt or need further help.
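To illustrate how quick the transition is, Quagga's vtysh shell accepts the familiar IOS-style commands; the hostname and neighbor address below are made up:

```
router1# show ip bgp summary
router1# configure terminal
router1(config)# router bgp 64512
router1(config-router)# neighbor 198.51.100.1 prefix-list upstream-in in
```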

This should get you started pretty quickly on Debian: Easy Quagga Tutorial
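On Debian of that era, the short version went roughly like this (package layout and paths may differ on newer releases, so treat it as a sketch):

```shell
apt-get install quagga
# enable the daemons you need in /etc/quagga/daemons
sed -i 's/^zebra=no/zebra=yes/; s/^bgpd=no/bgpd=yes/' /etc/quagga/daemons
cp /usr/share/doc/quagga/examples/zebra.conf.sample /etc/quagga/zebra.conf
cp /usr/share/doc/quagga/examples/bgpd.conf.sample /etc/quagga/bgpd.conf
/etc/init.d/quagga start
vtysh    # Cisco-style CLI from here on
```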

Solution 2

They're both capable platforms. Run something solid like Debian or CentOS on good server-grade hardware. Make sure you specify servers with Intel server NICs; they're much better than Broadcom for stability.

As far as BSD vs. Linux goes, it's easy: choose whichever you are most competent with.

Solution 3

I've seen old Celerons handle 80-90Mb/s of normal traffic on a Debian/Quagga setup with three full feeds without even breaking a sweat. However, the qualifier there is "normal" traffic, mainly HTTP, SMTP and DNS. The same machines have fallen flat on their faces during DDoS situations where the packets per second climbed to ridiculous numbers of mostly small UDP packets.

It's normally not the bandwidth you need to worry about, but the PPS you will be handling.
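To put rough numbers on that: at line rate with minimum-size frames, the PPS figure dwarfs what the Mbps figure suggests. Each 64-byte frame costs a further 20 bytes on the wire (preamble plus inter-frame gap), so a mere 100Mbps link can carry roughly 148,000 such packets per second:

```shell
link_bps=100000000     # 100Mbps link
frame_bytes=64         # minimum Ethernet frame
overhead_bytes=20      # preamble + inter-frame gap on the wire
pps=$(( link_bps / ( (frame_bytes + overhead_bytes) * 8 ) ))
echo "$pps"            # 148809
```

That's why a box that shrugs off 850Mbps of normal-size packets can still choke on a far smaller flood of tiny UDP packets.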

Unfortunately, I can't help you on the Linux vs. BSD routing-performance part of the question, but it shouldn't make any difference on current commodity hardware for a few hundred-megabit connections.

Updated on September 17, 2022

Comments

  • Admin
    Admin over 1 year

    I am building a server to act as a BGP border router for my 100Mbps uplink to my ISP.

    I need these features:

    1) Dual-stack BGP peering/routing (at least 100Mbps, maybe more).
    2) Potentially a full Internet BGP feed.
    3) Some basic ACL functionality.

    The hardware is an L3426 CPU with 8GB of RAM. The NIC will be an on-board dual-port Broadcom 5716.

    I've worked with Linux extensively before and it seems able to handle 100Mbps, but I've heard FreeBSD is faster at networking. Which one should I use? And are there performance benchmark numbers out there?

    Cheers.
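    For point 3 (basic ACLs), the usual approach on a Linux forwarding box is iptables rules on the FORWARD chain. This is only a sketch; the interface name and addresses are invented, and note the warning further down the page that heavy iptables use costs forwarding performance during attacks:

    ```shell
    # drop bogon sources arriving from the transit side
    iptables -A FORWARD -i eth0 -s 10.0.0.0/8 -j DROP
    iptables -A FORWARD -i eth0 -s 192.168.0.0/16 -j DROP
    # let replies through, then whitelist what may reach the inside
    iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A FORWARD -i eth0 -p tcp -d 192.0.2.10 --dport 25 -j ACCEPT
    ```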

    • user3789902
      user3789902 about 13 years
      any reason why you're not using a Cisco BGP router? Unfortunately most ISPs who let customers run BGP specify this requirement for 'compatibility'
    • Niall Donegan
      Niall Donegan about 13 years
      Em, first time I've heard of that restriction, and I work on a network that started with Quagga/Debian on Dell PowerEdge, up to Juniper and Cisco kit now. Also dealing with a LOT of different transit providers and exchanges. If an ISP is putting such a restriction in place, replace them with someone competent.
    • Philip
      Philip about 13 years
      Side note: since it's a router, I would highly suggest putting a spare NIC in there as a backup. If the on-board one goes bad, you're replacing the mobo instead of swapping out a quick PCIe card.
    • TomTom
      TomTom about 13 years
      You are wasting money. A cheap box from Mikrotik (a RouterBoard 1100AH, for example) could handle this for a lower price, and it's Linux-based.
    • ollybee
      ollybee about 13 years
      Several people have suggested using a dedicated NIC and not the on-board Broadcom ones. The Server Fault blog has a couple of interesting posts on this.
  • poige
    poige about 13 years
    "Routing performance" isn't quite the right term; it's "forwarding performance", actually.
  • Philip
    Philip about 13 years
    +1. FreeBSD can usually edge out Linux in benchmarks, but the difference (if there is any) is so small that you should simply pick the platform you're most comfortable with.
  • Niall Donegan
    Niall Donegan about 13 years
    Fair cop, guv! :)
  • Jonathan Ross
    Jonathan Ross about 13 years
    The other benefit of running Linux is that you can easily shape your traffic with tc, once you're past tc's initial learning curve. A word of warning, however: from what we've seen, running iptables on your forwarding box significantly reduces kernel performance during attacks.
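    A minimal sketch of the kind of tc shaping Jonathan means, assuming egress on eth1 and a 100Mbit uplink (the interface name, rates and the DNS filter are invented for illustration):

    ```shell
    # HTB root qdisc: unclassified traffic falls into class 1:10
    tc qdisc add dev eth1 root handle 1: htb default 10
    tc class add dev eth1 parent 1: classid 1:1 htb rate 100mbit
    # bulk traffic: guaranteed 80Mbit, may borrow up to the full link
    tc class add dev eth1 parent 1:1 classid 1:10 htb rate 80mbit ceil 100mbit
    # interactive traffic (here: DNS) gets a guaranteed 20Mbit
    tc class add dev eth1 parent 1:1 classid 1:20 htb rate 20mbit ceil 100mbit
    tc filter add dev eth1 parent 1: protocol ip u32 match ip dport 53 0xffff flowid 1:20
    ```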
  • Joris
    Joris about 13 years
    I'd love to hear more on the NIC <-> motherboard issue. Also, how many PPS are you successfully able to handle?
  • Jonathan Ross
    Jonathan Ross about 13 years
    On our average packet size (HTTP, SMTP, DNS mostly) we should manage duplexed 850Mbps. The DDoS was 120,000 PPS of 64-byte UDP packets. The effect on performance was negligible, but we weren't pushing that much traffic when it hit.
  • Jonathan Ross
    Jonathan Ross about 13 years
    We opted for a motherboard with two fast PCIe slots on independent lanes so the buffers don't bottleneck (I forget the exact terminology; it's been a while since we bought the hardware). One slot for egress, one for ingress. Fairly standard these days.