Is it dangerous to change the value of /proc/sys/net/ipv4/tcp_tw_reuse?


Solution 1

You can safely reduce the timeout, but you may run into issues with improperly closed connections on networks with packet loss or jitter. I wouldn't start tuning at 1 second; start at 15-30 seconds and work your way down.

Also, you really need to fix your application.
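
To make "fix your application" concrete: if the application (or the library it calls) opened one connection and reused it for all of its queries, connections would stop piling up in TIME_WAIT in the first place. Below is a minimal sketch of that pattern using the MySQL C API; it is not your code, and the host, user, password, and database names are placeholders, but it shows the shape of the fix.

    /*
     * Minimal sketch of connection reuse with the MySQL C API.
     * Placeholder host/user/password/database values; compile with
     * something like: gcc reuse.c $(mysql_config --cflags --libs)
     */
    #include <mysql/mysql.h>
    #include <stdio.h>

    int main(void)
    {
        MYSQL *conn = mysql_init(NULL);
        if (conn == NULL)
            return 1;

        /* Connect once... */
        if (mysql_real_connect(conn, "db-host", "user", "password",
                               "mydb", 0, NULL, 0) == NULL) {
            fprintf(stderr, "connect failed: %s\n", mysql_error(conn));
            mysql_close(conn);
            return 1;
        }

        /* ...then run every query over the same connection, so no new
         * socket (and no new TIME_WAIT entry) is created per query. */
        for (int i = 0; i < 1000; i++) {
            if (mysql_query(conn, "SELECT 1") != 0) {
                fprintf(stderr, "query failed: %s\n", mysql_error(conn));
                break;
            }
            MYSQL_RES *res = mysql_store_result(conn);
            if (res != NULL)
                mysql_free_result(res);
        }

        /* ...and close once, at the end. */
        mysql_close(conn);
        return 0;
    }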

RFC 1185 has a good explanation in section 3.2:

When a TCP connection is closed, a delay of 2*MSL in TIME-WAIT state ties up the socket pair for 4 minutes (see Section 3.5 of [Postel81]). Applications built upon TCP that close one connection and open a new one (e.g., an FTP data transfer connection using Stream mode) must choose a new socket pair each time. This delay serves two different purposes:

 (a)  Implement the full-duplex reliable close handshake of TCP. 

      The proper time to delay the final close step is not really 
      related to the MSL; it depends instead upon the RTO for the 
      FIN segments and therefore upon the RTT of the path.* 
      Although there is no formal upper-bound on RTT, common 
      network engineering practice makes an RTT greater than 1 
      minute very unlikely.  Thus, the 4 minute delay in TIME-WAIT 
      state works satisfactorily to provide a reliable full-duplex 
      TCP close.  Note again that this is independent of MSL 
      enforcement and network speed. 

      The TIME-WAIT state could cause an indirect performance 
      problem if an application needed to repeatedly close one 
      connection and open another at a very high frequency, since 
      the number of available TCP ports on a host is less than 
      2**16.  However, high network speeds are not the major 
      contributor to this problem; the RTT is the limiting factor 
      in how quickly connections can be opened and closed. 
      Therefore, this problem will be no worse at high transfer 
      speeds. 

 (b)  Allow old duplicate segments to expire. 

      Suppose that a host keeps a cache of the last timestamp 
      received from each remote host.  This can be used to reject 
      old duplicate segments from earlier incarnations of the 
      connection, if the timestamp clock can be guaranteed to have 
      ticked at least once since the old connection was open. 
      This requires that the TIME-WAIT delay plus the RTT together 
      must be at least one tick of the sender's timestamp clock. 

      Note that this is a variant on the mechanism proposed by 
      Garlick, Rom, and Postel (see the appendix), which required 
      each host to maintain connection records containing the 
      highest sequence numbers on every connection.  Using 
      timestamps instead, it is only necessary to keep one quantity 
      per remote host, regardless of the number of simultaneous 
      connections to that host.

*Note: It could be argued that the side that is sending a FIN knows what degree of reliability it needs, and therefore it should be able to determine the length of the TIME-WAIT delay for the FIN's recipient. This could be accomplished with an appropriate TCP option in FIN segments.

Solution 2

I think it is fine to change this value to 1. Rather than writing to /proc directly, a more appropriate way is to use the sysctl command:

[root@server]# sysctl -w net.ipv4.tcp_tw_reuse=1

There are no obvious dangers that I know of. A quick Google search turns up this link, which confirms that tcp_tw_reuse is a safer alternative to tcp_tw_recycle, but that it should still be used with caution.

Solution 3

This doesn't answer your question (and it's 18 months late), but it suggests another way of making your legacy app reuse ports:

A useful alternative to setting tcp_tw_reuse (or tcp_tw_recycle) system-wide is to insert a shared library (using LD_PRELOAD) into your app; that library can then allow the port to be reused. This lets your legacy app reuse ports without forcing the setting on every application on the system (and without modifying your app), which limits the impact of the tweak. For example,

    LD_PRELOAD=/opt/local/lib/libreuse.so ./legacy_app

This shared library should intercept the socket() call, call the real socket(), and set SO_REUSEADDR and/or SO_REUSEPORT on the returned socket. Look at http://libkeepalive.sourceforge.net for an example of how to do this (it turns on keepalives, but turning on SO_REUSEPORT is very similar; a minimal sketch of such an interposer follows the change below). If your ill-behaved legacy app uses IPv6, remember to change line 55 of libkeepalive.c from

    if((domain == PF_INET) && (type == SOCK_STREAM)) {

to

    if(((domain == PF_INET) || (domain == PF_INET6)) && (type == SOCK_STREAM)) {
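
For illustration, a rough sketch of such an interposer (this is not the libkeepalive source; it assumes glibc's dlsym(RTLD_NEXT, ...) and the hypothetical library name libreuse.so used above) could look like this:

    /*
     * Sketch of an LD_PRELOAD interposer in the spirit of libkeepalive:
     * wrap socket(), call the real socket() via dlsym(RTLD_NEXT, ...),
     * and set SO_REUSEADDR (and SO_REUSEPORT where available) on every
     * TCP socket the legacy app creates.
     */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    int socket(int domain, int type, int protocol)
    {
        static int (*real_socket)(int, int, int);
        int fd;
        int one = 1;

        if (real_socket == NULL)
            real_socket = (int (*)(int, int, int)) dlsym(RTLD_NEXT, "socket");

        fd = real_socket(domain, type, protocol);
        if (fd < 0)
            return fd;

        /* Only touch TCP sockets, IPv4 and IPv6, mirroring the check above. */
        if ((domain == PF_INET || domain == PF_INET6) && type == SOCK_STREAM) {
            setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
    #ifdef SO_REUSEPORT
            setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));
    #endif
        }
        return fd;
    }

Build it with something like gcc -shared -fPIC -o libreuse.so reuse_socket.c -ldl, then launch the legacy app with the LD_PRELOAD line shown earlier.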

If you're stuck, send me email and I'll write the code and send it to you.


Comments

  • Sagar (almost 2 years ago)

    We have a couple of production systems that were recently converted into virtual machines. One of our applications frequently accesses a MySQL database, and for each query it opens a connection, runs the query, and closes the connection.

    It is not the appropriate way to query (I know), but we have constraints that we can't seem to get around. Anyway, the issue is this: while the machine was a physical host, the program ran fine. Once converted to a virtual machine, we noticed intermittent connection issues to the database. There were, at one point, 24000+ socket connections in TIME_WAIT (on the physical host, the most I saw was 17000 - not good, but not causing problems).

    I would like these connections to be reused so that we don't run into this connection problem, and so:

    Questions:

    Is it ok to set the value of tcp_tw_reuse to 1? What are the obvious dangers? Is there any reason I should never do it?

    Also, is there any other way to get the system (RHEL/CentOS) to prevent so many connections from going into TIME_WAIT, or getting them to be reused?

    Lastly, what would changing tcp_tw_recycle do, and would that help me?

    In advance, thanks!

    • Admin (about 9 years ago)
      This link explains the dangers of tcp_tw_recycle and tcp_tw_reuse well. Don't use them.
  • Sagar (over 13 years ago)
    Thanks for the explanation. The problem is in the library, which I do not have control over.
  • Fantius (over 11 years ago)
    No, that's not what it says. It says (talking about tcp_tw_reuse), "It is generally a safer alternative to tcp_tw_recycle".