Network latency: 100Mbit vs. 1Gbit

Solution 1

Yes, gbit has lower latency, since:

  • the same number of bytes can be transferred in less time

BUT the improvement is only appreciable if the packet(s) have a certain size:

  • 56-byte packet => virtually no faster transfer
  • 1000-byte packet => 20% faster transfer
  • 20000-byte packet(s) => 80% faster transfer

So if you have an application which is very sensitive to latency (4 ms vs. 0.8 ms round trip for 20 KB) and requires larger packets to be transferred, then switching from 100 Mbit to gbit can give you a latency reduction, even though you use much less than 100 Mbit/s on average (i.e. the link is not permanently saturated).
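As a rough back-of-the-envelope check (my own numbers, not part of the measurements below), the wire time alone already shows why only larger packets benefit; this ignores NIC, driver and switch overhead, which dominate for small packets:

# Serialization (wire) time only, one direction, one hop; real RTTs are higher
# because NIC, driver and switch overhead are ignored here.
for size in 56 1000 20000; do
  awk -v b="$size" 'BEGIN {
    printf "%6d bytes: %7.4f ms @ 100 Mbit   %7.4f ms @ 1 Gbit\n",
           b, b*8/100e6*1000, b*8/1e9*1000
  }'
done

A 20000-byte payload spends about 1.6 ms per direction on a 100 Mbit wire but only 0.16 ms at 1 Gbit, which roughly matches the 4 ms vs. 0.8 ms round trips quoted above; for 56 bytes the wire time is a few microseconds either way, so it disappears in the fixed overhead.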

Server (100mbit) -> Switch (gbit) -> Server (100mbit):

size: 56 :: rtt min/avg/max/mdev = 0.124/0.176/0.627/0.052 ms
size: 100 :: rtt min/avg/max/mdev = 0.131/0.380/1.165/0.073 ms
size: 300 :: rtt min/avg/max/mdev = 0.311/0.463/2.387/0.115 ms
size: 800 :: rtt min/avg/max/mdev = 0.511/0.665/1.012/0.055 ms
size: 1000 :: rtt min/avg/max/mdev = 0.560/0.747/1.393/0.058 ms
size: 1200 :: rtt min/avg/max/mdev = 0.640/0.830/2.478/0.104 ms
size: 1492 :: rtt min/avg/max/mdev = 0.717/0.782/1.514/0.055 ms
size: 1800 :: rtt min/avg/max/mdev = 0.831/0.953/1.363/0.055 ms
size: 5000 :: rtt min/avg/max/mdev = 1.352/1.458/2.269/0.073 ms
size: 20000 :: rtt min/avg/max/mdev = 3.856/3.974/5.058/0.123 ms

Server (gbit) -> Switch (gbit) -> Server (gbit):

size: 56 :: rtt min/avg/max/mdev = 0.073/0.144/0.267/0.038 ms
size: 100 :: rtt min/avg/max/mdev = 0.129/0.501/0.630/0.074 ms
size: 300 :: rtt min/avg/max/mdev = 0.185/0.514/0.650/0.072 ms
size: 800 :: rtt min/avg/max/mdev = 0.201/0.583/0.792/0.079 ms
size: 1000 :: rtt min/avg/max/mdev = 0.204/0.609/0.748/0.078 ms
size: 1200 :: rtt min/avg/max/mdev = 0.220/0.621/0.746/0.080 ms
size: 1492 :: rtt min/avg/max/mdev = 0.256/0.343/0.487/0.043 ms
size: 1800 :: rtt min/avg/max/mdev = 0.311/0.672/0.815/0.079 ms
size: 5000 :: rtt min/avg/max/mdev = 0.347/0.556/0.803/0.048 ms
size: 20000 :: rtt min/avg/max/mdev = 0.620/0.813/1.222/0.122 ms

= on average, over multiple servers, an 80% latency reduction for a 20 KB ping

(If only one of the links is gbit, you will still get a 5% latency reduction for a 20 KB ping.)
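The exact commands behind these tables aren't shown; a minimal sketch that reproduces such a sweep with iputils ping on Linux (the hostname is a placeholder, and intervals below 0.2 s usually need root) might look like this:

TARGET=server   # placeholder hostname
for size in 56 100 300 800 1000 1200 1492 1800 5000 20000; do
  printf "size: %s :: " "$size"
  ping -c 100 -i 0.2 -s "$size" "$TARGET" | tail -1   # keep only the rtt summary line
done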

Solution 2

The only way latency would drop appreciably is if the current 100Mbit link is saturated. If it is not saturated, you will likely not notice any change.

Additionally, your assumption that the 1Gbit link will be able to support larger packets is incorrect. Max packet size is determined by the MTU of the various devices along the path that the packet takes - starting with the NIC on your server, all the way through to the MTU of your customer's computer. In internal LAN applications (when you have control over all the devices along the path), it is sometimes possible to increase the MTU, but in this situation, you are pretty much stuck with the default MTU of 1500. If you send packets larger than that, they will end up getting fragmented, thereby actually decreasing performance.
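If you want to verify this yourself, a simple check with iputils ping and the "don't fragment" option shows where the 1500-byte MTU boundary sits (hostname and interface name below are placeholders):

# 1472 bytes of ICMP payload + 28 bytes of ICMP/IP headers = 1500-byte MTU
ping -c 3 -M do -s 1472 server   # should get replies on a standard 1500 MTU path
ping -c 3 -M do -s 1473 server   # typically fails with a "message too long" error
ip link show eth0 | grep -o 'mtu [0-9]*'   # MTU of the local interface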

Solution 3

You're looking at the world through a pinhole. A valid test of latency differences at different speeds would be between two identical NICs connected with a cross-connect cable. Set the NICs to matching speeds of 10 Mb, 100 Mb and 1000 Mb. This will show that there is virtually no difference in latency at the different speeds. All packets travel at the same wire speed regardless of the maximum bandwidth being used. Once you add switches with store-and-forward caching, everything changes. Testing latency through a switch must be done with only two connections to the switch; any other traffic may affect the latency of your test. Even then the switch may roll over logs, adjust packet-type counters, update its internal clock, etc. Anything may affect latency.
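For such a back-to-back test, the link speeds can usually be forced with ethtool (a sketch, assuming Linux and a placeholder interface name eth0; requires root, and 1000BASE-T normally needs autonegotiation, so only the slower speeds are forced here):

ethtool -s eth0 speed 10 duplex full autoneg off    # lock the NIC to 10 Mbit
ethtool -s eth0 speed 100 duplex full autoneg off   # lock the NIC to 100 Mbit
ethtool -s eth0 autoneg on                          # back to autoneg for gigabit
ethtool eth0 | grep -E 'Speed|Duplex'               # verify the resulting link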

Yes, switching from 100 Mb to 1 Gb might be faster (lower latency) due to hardware changes: a different NIC, a different switch, a different driver. I have seen larger changes in ping latency from driver differences than from any other change - bandwidth, switches, offloading NICs, etc.

The switch would be the next biggest change, with cut-through significantly faster than store-and-forward for single-transmit tests. However, a well-designed store-and-forward switch may overtake the cut-through switch in overall performance under high load. In the early days of gigabit, I've seen 10 Mb high-performance backplane switches with lower latency than cheap gigabit switches.

Ping tests are practically irrelevant for performance analysis when using the Internet. They are quick tests to get a ballpark idea of what is happening on the transport at the moment of the test. Production performance testing is much more complicated than just a ping. High-performance switches are computers, and under high load they behave differently: latency changes.

Having a slower NIC, or a NIC set to a slower speed, could actually help a server with concurrent bursts by throttling the input to the server using the switch's cache. A single retransmit may negate any decrease in latency. Usually the medium- to high-load traffic levels are what matter, not single ping tests. For example, an old, slow Sun UltraSPARC (higher latency for a single ping) outperforms a new, cheap gigabit desktop used as a dev server when under 70% of a 100 Mb bandwidth load. The desktop has a faster Gb NIC, a faster Gb-Gb connection, faster memory, more memory, a faster disk and a faster processor, but it doesn't perform as well as tuned server-class hardware/software. This is not to say that a current tuned server running Gb-Gb isn't faster than the old hardware, or even able to handle larger throughput loads. There is just more complexity to the question of "higher performance" than you seem to be asking about.

Find out if your provider is using different switches for the 100 Mb vs. 1 Gb connections. If they use the same switch backplane, then I would only pay for the increase if the traffic levels exceeded the lower bandwidth. Otherwise you may find that in a short time many other users switch over to gigabit, and the few users left on the old switch now have higher performance (lower latency) during high loads on the switch (overall switch load, not just traffic to your servers).

Apples-and-oranges example: a local ISP provided a new switch for bundled services, DSL and phone. Initially users saw an increase in performance. The system was oversold. Now users that remain on the old switch have consistently higher performance. Late at night, users on the new system are faster. In the evening, under high load, the old switch's clients clearly outperform the new, overloaded system.

Lower latency doesn't always correlate to faster delivery. You mention MySQL in the 20 requests made to serve a single page. That traffic shouldn't be on the same NIC as the page requests. Moving all internal traffic to an internal network will reduce collisions and total packet counts on the outgoing NIC, and provide larger gains than the 0.04 ms latency gain of a single packet. Reduce the number of requests per page to reduce page-load latency. Compress the pages, HTML, CSS, JavaScript and images to decrease page-load times. These three changes will give larger ongoing overall gains than paying for bandwidth that isn't being used in order to get a 0.04 ms latency reduction. The ping needs to run for 24 hours and be averaged to see the real latency change.

Smart switches now do adaptive RTSP-type throttling, with small initial bandwidth increases and large transfers throttled. Depending on your page sizes (graphics, large HTML/CSS/JavaScript) you may see initial connection latencies/bandwidth much lower/higher than for a large page or full-page transfers. If part of your page is streaming, you may see drastically different performance between the page and the stream.
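As a quick sanity check of the compression advice (the URL is a placeholder, and this assumes curl is available), you can confirm that responses are actually compressed and compare transfer sizes:

curl -sI -H 'Accept-Encoding: gzip' https://www.example.com/ | grep -i 'content-encoding'
curl -so /dev/null -w 'compressed:   %{size_download} bytes\n' -H 'Accept-Encoding: gzip' https://www.example.com/
curl -so /dev/null -w 'uncompressed: %{size_download} bytes\n' https://www.example.com/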

Solution 4

I think you have a fundamental misconception about bandwidth, latency and "speed". Speed is a function of bandwidth and latency. For instance, consider a shipment of data on DVDs driven across the country, taking 3 days to arrive. Compare that to sending the data across the Internet. The Internet has a much lower-latency connection, but to match the "speed" of the connection to the shipment you would have to receive at 9.6 MB per second (reference example from this source).

In your case, upgrading to higher bandwidth would allow you to serve more concurrent users, but it would not improve the latency to any individual user.
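A small illustration of that point (my own numbers, assuming a ~50 ms WAN round trip comparable to the home ping mentioned in the comments): the round-trip term is untouched by the upgrade, only the transfer term shrinks, and for typical response sizes that term is small to begin with.

# time to deliver a response ~= round trip + size / bandwidth
awk 'BEGIN {
  rtt_ms = 50                          # assumed WAN round trip to a user
  for (kb = 10; kb <= 100; kb *= 10) {
    bits = kb * 1000 * 8
    printf "%4d KB: %5.1f ms @ 100 Mbit   %5.1f ms @ 1 Gbit\n",
           kb, rtt_ms + bits/100e6*1000, rtt_ms + bits/1e9*1000
  }
}'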

Solution 5

This depends on the type of switch you're connecting to. On some vendors (such as Crisco... I mean Cisco), ICMP packets will flow back to the CPU (gag).

You may find a better test would be to perform a host-to-host test using a 100Mbps and 1Gbps link (i.e. not a host-to-switch or host-to-router test).
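A sketch of such a host-to-host test, assuming two Linux hosts with iputils ping and iperf3 installed (hostnames are placeholders):

# On host A: start an iperf3 server
iperf3 -s

# On host B: measure RTT and throughput straight to host A
ping -c 100 -s 1000 hostA | tail -1   # RTT summary for 1000-byte payloads
iperf3 -c hostA -t 30                 # 30-second TCP throughput test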

At the end of the day, the latency is going to come down to the forwarding rate on the switch and the particulars of the switch's architecture (where the ASICs are placed on the board, how locking is handled between line cards, etc). Good luck with your testing.

Comments

  • Andreas Richter
    Andreas Richter almost 2 years

    I have a webserver with a current connection of 100 Mbit and my provider offers an upgrade to 1 Gbit. I understand that this refers to throughput, but the larger the packets, the faster they can be transmitted as well, so I would expect a slight decrease in response time (e.g. ping). Did anybody ever benchmark this?

    Example (100mbit to 100mbit server) with 30 byte load:

    > ping server -i0.05 -c200 -s30
    [...]
    200 packets transmitted, 200 received, 0% packet loss, time 9948ms
    rtt min/avg/max/mdev = 0.093/0.164/0.960/0.093 ms
    

    Example (100mbit to 100mbit server) with 300 byte load (which is below MTU):

    > ping server -i0.05 -c200 -s300
    [...]
    200 packets transmitted, 200 received, 0% packet loss, time 10037ms
    rtt min/avg/max/mdev = 0.235/0.395/0.841/0.078 ms
    

    So from 30 to 300 the avg. latency increases from 0.164 to 0.395 - I would expect this to be a slower increase for a 1gbit-to-1gbit connection. I would even expect 100mbit-to-1gbit to be faster, if the connection goes through a switch which first waits until it has received the whole packet.

    Update: Please read the comments to the answers carefully! The connection is not saturated, and I don't think that this speed increase will matter for humans for one request, but it is about many requests which add up (Redis, Database, etc.).

    Regarding answer from @LatinSuD:

    > ping server -i0.05 -c200 -s1400
    200 packets transmitted, 200 received, 0% packet loss, time 9958ms
    rtt min/avg/max/mdev = 0.662/0.866/1.557/0.110 ms
    
    • huseyin tugrul buyukisik
      huseyin tugrul buyukisik over 7 years
      Also, there is a different encoding (10b/12b?) with new gbit Ethernet cards and cables, so it could have 25% more performance on top of the 10x (when saturated) vs. 100Mbit, maybe?
  • Mike Renfro
    Mike Renfro about 13 years
    "Appreciably" is the key word here. I just checked two servers with identical hardware and low network load, but with different ethernet speeds. Average ping time (local, with the ping source on the same subnet) dropped from 0.21 milliseconds to 0.17 milliseconds. Pinging the same servers from home, each had a time of 53 milliseconds. There are way too many factors beyond your provider's control to make that a worthwhile upgrade.
  • Philip
    Philip about 13 years
    +1 Technically there is a difference, however it is completely inappreciable unless the particular application is incredibly sensitive to latency.
  • Andreas Richter
    Andreas Richter about 13 years
    Thank you for the test! From 0.21 to 0.17 is an improvement of 20%, which is great. I'm not concerned about the ping from home (50ms) but about the time the request stays at the provider. We tweaked all processing (CPU) and non-drive-IO (RAM/cache/etc.) to the max, so now I question how much the network speed between the servers adds to the total latency. E.g. we make ~20 Redis requests for one webserver request. @MikeRenfro: can you do the same test with a larger payload size? A normal ping is 30 bytes, the average Redis packet around 250. I would expect the difference to grow.
  • Andreas Richter
    Andreas Richter about 13 years
    That is incorrect - simply compare the ping with different payloads that are below the current MTU: ping server -i0.05 -c200 -s30 vs. ping server -i0.05 -c200 -s300 ... Or speaking in your example: the truck with 1 million DVDs would drive slower, because it is heavier than the one with 1 DVD.
  • raja
    raja about 13 years
    @andreas ping time isn't the whole story - so let's assume, for the sake of argument, that packets smaller than the MTU arrive faster than packets at full MTU. The difference is that you don't have all the data that the one full-MTU packet carries in the same amount of time that the two smaller packets take to arrive. The latency is the time taken for all the data to arrive. To go back to the truck analogy, even if the truck with 1 CD arrives in half the time of the truck carrying 500 CDs, it still takes that truck 500 trips to deliver the data (750 days vs. 3).
  • Andreas Richter
    Andreas Richter about 13 years
    @JimB: yes, as mentioned my question was not about load size, but about the speed: the full truck needs 10 hours to be scanned by customs, the small one only 1 hour. 100mbit/s also means that a 100-bit packet needs a theoretical minimum of 0.000954ms and a 1000-bit packet a theoretical minimum of 0.00954ms. Of course processing time etc. on the network card/switch/etc. makes up a bigger chunk of the total latency, but I would also expect these to be faster in a 1gbit switch, etc. Please see the comment by @MikeRenfro, he actually tested it and came to a 20% improvement.
  • EEAA
    EEAA about 13 years
    @Andreas - yes, technically it was a 20% improvement. However, when you look at the entire system and all the variables that come into play when calculating latency (application logic, IO latency, etc.), that 20% becomes a tiny fraction of 1%.
  • Philip
    Philip about 13 years
    @Andreas; I think you totally missed the point of those comments. That's an improvement of 40 nanoseconds - an amount that is completely imperceptible to human beings. And it's not a cumulative number; it's not like each request takes 40 nanoseconds longer; it's just that the first will be that much quicker, so the second will be lined up right behind it either way.
  • Philip
    Philip about 13 years
    Someone already ran the numbers for a particular server and the difference came back at 40 nanoseconds. Your guesstimate is too large by a factor of 25.
  • raja
    raja about 13 years
    @andreas - 20% on the same subnet, which is irrelevant to your question
  • Andreas Richter
    Andreas Richter about 13 years
    @ErikA: That depends on the system - as mentioned, we make multiple requests to other servers for one webserver request (MySQL / Redis / etc.), and if the packets are bigger (e.g. 1000 bytes) the effect is even larger.
  • Andreas Richter
    Andreas Richter about 13 years
    @ChrisS: the question wasn't about perceivability - it was a question of whether somebody ever tested it, and Mike did. It's also not 40 nanoseconds, it's microseconds, so you are missing the point by a factor of 1000... kudos. Believe me that I know what I'm doing... 20% would be enough to justify the additional costs.
  • Andreas Richter
    Andreas Richter about 13 years
    @JimB: I have 40 servers all quite close to each other (0.2-0.6ms ping), so the same subnet is very important and the best setup, to measure the effect.
  • Andreas Richter
    Andreas Richter about 13 years
    @ShaneMadden: if nobody ever tested it, I can't know the difference - that was the purpose of the question here, so I can evaluate if it will make a difference. In my application I make around 20-50 requests to other servers per webserver request, and for normal-sized packets I expect the effect to be even greater than with a 38-byte ping.
  • Andreas Richter
    Andreas Richter about 13 years
    @LatinSuD: thank you for the constructive approach and for not claiming that I don't know what I'm doing. I will post the results in the actual question since I can do formatting there. But btw. I would also expect the 90% overhead to see a speed increase as well, since the processors in the network cards, etc. are (hopefully) also faster for GBit. @ChrisS: microseconds, and I don't understand what you mean with the 25.
  • Philip
    Philip about 13 years
    My apologies for mixing up micro and nano; in any case it's imperceptible. LatinSuD guesstimated a difference of 1 whole millisecond, which is 25 times more than the 40 microseconds found by Mike.
  • Andreas Richter
    Andreas Richter about 13 years
    Thank you, I only refer to host-switch-host tests and I don't understand all the switch internals. I would simply love to see whether somebody ever benchmarked Host-(100mbit)-Switch-(100mbit)-Host, Host-(100mbit)-Switch-(1gbit)-Host and Host-(1gbit)-Switch-(1gbit)-Host latency for different packet sizes. If nobody did, I will do it and post the answer here.
  • Andreas Richter
    Andreas Richter about 13 years
    @ChrisS: no worries. The 0.04ms was for a 38-byte ping; if our average server-to-server packet is around 300 bytes, the difference could be 0.4ms. If we now make 20 requests for one webserver request (Redis, MySQL, etc.), this would lead to an 8ms speed increase, which would be a 10% speed increase for current web requests and would totally justify the additional costs. I simply don't have the resources here to run the tests myself, but believe me, even if it is not perceivable by humans, it can still be relevant. Like electricity or god.
  • Sean
    Sean about 13 years
    I used to resell switch gear. Suffice it to say, your findings suggest to me that you're plugged into a Cisco switch. There are other alternatives that provide lower latencies. As you rightly pointed out, more bandwidth doesn't translate into lower latencies (Comcast is the primary culprit for making people dumb in this regard). Given you're in a hosted environment, you're probably stuck with their hardware (and since it's a hosted environment, the extra few microseconds aren't terribly crucial). Show me millions of pps at steady state and I'll get interested in providing more details. :)
  • EEAA
    EEAA about 13 years
    @Andreas - you are going to need to do your own testing. There are way too many variables here to infer that you will experience the same results just because some person on the internet did a test and saw a 20% decrease in latency when going from 100Mbit to 1Gbit.
  • Philip
    Philip about 13 years
    @Andreas, I highly doubt it will scale like that; both a 10x larger packet being 10x less latency and 20x as many packets being 20x faster. If it does pan out, that's a 10% reduction in network overhead, you still have to account for the time spent in the application, which is likely to be one to several orders of magnitude longer than the network latency component. I hope it works out well for you in any case.
  • Andreas Richter
    Andreas Richter about 13 years
    @ErikA: yes, I will do so; I simply hoped somebody could give me a benchmark of what I could expect. I will post my results here.
  • Andreas Richter
    Andreas Richter about 13 years
    @ChrisS: Thank you. Regarding the 10x larger packets -> 10x less latency, I don't know why it shouldn't, but I will see. Regarding 20x packets I can assure you that if 1 packet takes 1ms less, 20 packets will be 20ms faster. Regarding the 10% reduction (8ms): our whole turn-around time for a webserver request is 80ms, so this already includes the application time. Most of the work is done in-memory with easy calculations; it is simply a lot of data that is being collected from many servers, so the network does have an impact. That's similar to the architecture of Facebook, Amazon and Stackoverflow.
  • Mike Renfro
    Mike Renfro about 13 years
    First test was with default ping settings (56 bytes of payload, 64 bytes with header). Same systems, with ping -s 200: .19 ms versus .26 ms. With ping -s 1000: .23 ms versus .56 ms. All these tests are just with regular ping, I'm not convinced network latency is the bottleneck, or that the rest of the servers will be consistent to sub-millisecond levels, but there you go.
  • Andreas Richter
    Andreas Richter about 13 years
    @Mike: Thank you! This is the answer I was looking for: 56 byte = 19% improvement, 200 byte = 27%, 1000 byte = 59%! So the larger the packet, the more it matters. And Gigabit increased only from 0.17ms (56 byte) to 0.23ms (1000 byte) => 35%, while 100 Mbit increased from 0.21 to 0.56 => 166%. Can you post your data as an answer?
  • Andreas Richter
    Andreas Richter about 13 years
    Thank you for all the great input: 1.) It's the same switch, 2.) A second NIC for internal/external traffic sounds reasonable and worth a try - even though e.g. MySQL/etc. are all bi-directional, so collisions would "only" be reduced by half, 3.) A real-world test is preferable to just NIC-NIC; Mike has done it with a subnet and got what you expected regarding hardware: "56 byte = 19% improvement, 200 byte = 27%, 1000 byte = 59%! So the larger the packet, the more it matters. And Gigabit increased only from 0.17ms (56 byte) to 0.23ms (1000 byte) => 35%, while 100 Mbit increased from 0.21 to 0.56 => 166%".
  • Brian
    Brian over 11 years
    With most networking gear being store-and-forward, a packet has to be fully received by a switch/router before it's passed on. Faster connections reduce this time, which also reduces latency (as long as the connection doesn't get its speed from several parallel links).
  • Tyler
    Tyler almost 11 years
    Due to the explanation, this answer is by far the best on the page. The others seem to want to explain away this fact by assuming a special case - network performance across a long distance/lots of switches. That is important to consider, especially considering the OP's concern (webserver), but this answer also shows how much of a difference switch speed can make in a single hop.
  • Damon
    Damon over 6 years
    @AndreasRichter: I'm a couple of years late, but note that those saved microseconds are only a third of the equation. Latency is three things: The speed of light (or rather, electrons in copper wire, pretty much constant, the longer your cables the worse), the time needed to transmit N bits (which depends on the link speed), and the time needed to process packets (which, assuming the same budgets, you can consider constant). So, there is not much you can really effectively do in terms of reducing latency by cranking up link speed.
  • Andreas Richter
    Andreas Richter over 6 years
    @Damon: oh well, back then it was really important for me to optimize these tiny parts... some years later the company had hundreds of servers... some years later, hundreds of employees. Interesting to come back here and see how focusing on detail potentially helped me to get this happening! Thank you, everybody!