Detecting Socket Disconnect Using TCP KeepAlive


Solution 1

You will not get far with the built-in keep-alives of the TCP stack, because the keep-alive interval cannot be tuned by your application: it is set by the OS, and the defaults are rather high (hours). This is not specific to Java.

If you need to time out in a reasonable time, you have to implement some kind of keep-alive in the protocol you are using. Most of the high-level protocols I have seen have some kind of NOP functionality, where one party sends an "Are you there?" message and the other party replies "Yes, I'm here" without doing anything else.
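A minimal sketch of such an application-level heartbeat in Java (the `isPeerAlive` method name and the `PING`/`PONG` strings are illustrative choices, not part of any standard protocol; the `main` method wires up a loopback peer purely to demonstrate the exchange):

```java
import java.io.*;
import java.net.*;

public class HeartbeatDemo {

    // Sends "PING" and waits up to timeoutMs for a "PONG" reply.
    // Returns false if the peer sends nothing (timeout) or the stream ended.
    static boolean isPeerAlive(Socket socket, int timeoutMs) throws IOException {
        socket.setSoTimeout(timeoutMs); // bound the blocking read
        PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
        BufferedReader in = new BufferedReader(
                new InputStreamReader(socket.getInputStream()));
        out.println("PING");
        try {
            return "PONG".equals(in.readLine());
        } catch (SocketTimeoutException e) {
            return false; // peer did not answer within the timeout
        }
    }

    public static void main(String[] args) throws Exception {
        // Loopback demo: a trivial peer thread that answers PING with PONG.
        try (ServerSocket server = new ServerSocket(0)) {
            Thread peer = new Thread(() -> {
                try (Socket s = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(s.getInputStream()));
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    if ("PING".equals(in.readLine())) out.println("PONG");
                } catch (IOException ignored) {}
            });
            peer.start();
            try (Socket client = new Socket("localhost", server.getLocalPort())) {
                System.out.println(isPeerAlive(client, 2000) ? "peer alive" : "peer gone");
            }
            peer.join();
        }
    }
}
```

In a real server you would call something like `isPeerAlive` periodically on each idle connection and drop connections that fail the check; note that creating a fresh `BufferedReader` per call only works because this demo performs a single exchange.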

Solution 2

For an in-depth discussion of TCP Keep-Alives see my answer here.

But basically, TCP Keep-Alives are likely the best method for detecting a stale connection. The main problem is the OS defaults: the connection typically has to sit idle for 2 hours before the first Keep-Alive probe is sent, followed by roughly 11 more minutes of unanswered probes before the connection is actually dropped.

Don't write your own application-layer Keep-Alive protocol when TCP already has it built in. All you have to do is set the keep-alive timers to something more reasonable, like 2-3 minutes.

Unfortunately, since the keep-alive timers are managed at the OS level and not from within the JVM, it is difficult (but not impossible) to configure them from within your code on a per-socket basis.
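On Java 11 and later, per-socket tuning of the keep-alive timers is possible via `jdk.net.ExtendedSocketOptions` on platforms whose kernel exposes them (e.g. Linux and macOS). A hedged sketch, where the 120 s idle / 10 s interval / 5 probe values are example choices, not recommendations:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import jdk.net.ExtendedSocketOptions;

public class KeepAliveTuning {

    // Tunes per-socket keep-alive timers where supported; returns false if the
    // platform/JDK does not expose these options.
    static boolean tuneKeepAlive(Socket socket) throws IOException {
        if (!socket.supportedOptions().contains(ExtendedSocketOptions.TCP_KEEPIDLE)) {
            return false; // no per-socket keep-alive timers on this platform
        }
        socket.setKeepAlive(true);
        // Example values: first probe after 120 s idle, then every 10 s,
        // drop the connection after 5 unanswered probes (~170 s total).
        socket.setOption(ExtendedSocketOptions.TCP_KEEPIDLE, 120);
        socket.setOption(ExtendedSocketOptions.TCP_KEEPINTERVAL, 10);
        socket.setOption(ExtendedSocketOptions.TCP_KEEPCOUNT, 5);
        return true;
    }

    public static void main(String[] args) throws IOException {
        // Loopback pair purely to have a connected socket to configure.
        try (ServerSocket server = new ServerSocket(0);
             Socket client = new Socket("localhost", server.getLocalPort());
             Socket accepted = server.accept()) {
            System.out.println("per-socket keep-alive tuning supported: "
                    + tuneKeepAlive(accepted));
        }
    }
}
```

On platforms without these options (or on Java versions before 11), `tuneKeepAlive` reports `false` and you are back to the OS-wide defaults.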

Updated on August 02, 2020

Comments

  • Admin
    Admin over 3 years

    I'm developing a server that hosts 3rd party devices over TCP/IP and have been experiencing sudden connection drops (the devices are connecting via cellular). I need to find a way to detect a disconnect without having to write data to the device itself.

    I've looked at using the TCP keepalive functionality but Java doesn't appear to allow any adjustment of the timing of the keepalive operations.

    Is there any suggested method for doing this?

    My simplified socket code is as follows:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class Test2Socket {
        public static void main(String[] args) {
            try {
                ServerSocket skt = new ServerSocket(1111);

                // Block until a device connects
                Socket clientSocket = skt.accept();

                // Enable OS-level TCP keep-alive (timing is controlled by the OS, not the JVM)
                clientSocket.setKeepAlive(true);

                System.out.println("Connected..");

                BufferedReader input = new BufferedReader(
                        new InputStreamReader(clientSocket.getInputStream()));

                String inputLine;

                // Print incoming lines until the stream ends
                while ((inputLine = input.readLine()) != null) {
                    System.out.println(inputLine);
                }

            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
    

    Any feedback would be greatly appreciated.

  • Andreas Florath
    Andreas Florath about 9 years
    The first part of your question is simply not true. Of course there are system calls to set the timers: setsockopt(... TCP_KEEPIDLE ...).
  • Andreas Florath
    Andreas Florath about 9 years
    The proposal in the second part is dangerous and adds complexity to client and server: you mix up layers 4 and 7 here (and must sort out all things manually). TCP gives you what you want to have - why are you doing it again? There are technical solutions - even for Java.
  • Laszlo Valko
    Laszlo Valko about 9 years
    @AndreasFlorath: I think the original poster would really love to see: a) a link to the documentation about this wonderful system call on Windows, Mac OS X, etc.; b) how to use these from Java, in a portable way.
  • user207421
    user207421 over 8 years
    @AndreasFlorath Socket options to set per-socket keepalive timers do not exist on all platforms, and not in Java.
  • R.G.
    R.G. about 7 years
    @LaszloValko: here's a link to call on Windows: msdn.microsoft.com/en-us/library/windows/desktop/…
  • Laszlo Valko
    Laszlo Valko about 7 years
    @R.G.: You are missing the point. SO_KEEPALIVE is not good enough.
  • user207421
    user207421 almost 3 years
    The question is about Java, and Java supports read timeouts from TCP sockets on all platforms. Nothing more is required.
  • Laszlo Valko
    Laszlo Valko almost 3 years
    @user207421 Well, you need one more thing: a client sending another request before your read timeout happens. Now how do you coerce the client to send something if it is just not willing to?
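The read-timeout approach debated in the comments above looks like the sketch below; as the last comment points out, it only detects a stale connection when the protocol obliges the peer to send something within the timeout window. The 500 ms timeout is an arbitrary demo value, and the loopback client deliberately stays silent to trigger the timeout:

```java
import java.io.*;
import java.net.*;

public class ReadTimeoutDemo {
    public static void main(String[] args) throws Exception {
        // Loopback pair: the client connects but never sends anything.
        try (ServerSocket server = new ServerSocket(0);
             Socket client = new Socket("localhost", server.getLocalPort());
             Socket accepted = server.accept()) {

            accepted.setSoTimeout(500); // fail reads that stall longer than 500 ms

            BufferedReader in = new BufferedReader(
                    new InputStreamReader(accepted.getInputStream()));
            try {
                in.readLine(); // the client stays silent, so this times out
            } catch (SocketTimeoutException e) {
                System.out.println("read timed out; treat connection as stale");
            }
        }
    }
}
```

Unlike TCP keep-alive, this costs nothing extra on the wire, but a peer that is alive yet legitimately quiet for longer than the timeout will be misclassified as stale.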