Detecting client TCP disconnection while using NetworkStream class

Solution 1

Is the server expecting to be sent multiple objects over the same connection? If so, I don't see how this code will work, as no delimiter is sent to mark where one object ends and the next begins.

If only one object is sent and the connection is closed afterwards, then the original code would work.

Some network operation has to be initiated in order to find out whether the connection is still active. What I would do, instead of deserializing directly from the network stream, is buffer each message into a MemoryStream. That would allow me to detect when the connection was lost. I would also use message framing to delimit multiple messages on the stream.

        NetworkStream ns = client.GetStream();
        BinaryReader br = new BinaryReader(ns);

        while (true)
        {
            // Message framing: first, read the number of bytes to expect.
            int objectSize = br.ReadInt32();

            if (objectSize == 0)
                break; // client disconnected

            byte[] buffer = new byte[objectSize];
            int index = 0;

            int read = ns.Read(buffer, index, Math.Min(objectSize, 1024));
            while (read > 0)
            {
                objectSize -= read;
                index += read;
                read = ns.Read(buffer, index, Math.Min(objectSize, 1024));
            }

            if (objectSize > 0)
            {
                // Client aborted the connection in the middle of a message.
                break;
            }

            BinaryFormatter bf = new BinaryFormatter();
            using (MemoryStream ms = new MemoryStream(buffer))
            {
                object o = bf.Deserialize(ms);
                ReceiveMyObject(o);
            }
        }
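To make the framing scheme concrete, here is a minimal, self-contained sketch of the same length-prefix idea using a MemoryStream in place of a real socket, so the round trip can be run in isolation. The `WriteFramed`/`ReadFramed` helper names are mine, not part of the framework:

```csharp
using System;
using System.IO;
using System.Text;

class FramingDemo
{
    // Write a length-prefixed message: 4-byte length, then the payload.
    static void WriteFramed(Stream stream, byte[] payload)
    {
        var bw = new BinaryWriter(stream);
        bw.Write(payload.Length);
        bw.Write(payload);
        bw.Flush();
    }

    // Read one length-prefixed message back.
    static byte[] ReadFramed(Stream stream)
    {
        var br = new BinaryReader(stream);
        int size = br.ReadInt32();
        return br.ReadBytes(size);
    }

    static void Main()
    {
        var ms = new MemoryStream();
        WriteFramed(ms, Encoding.UTF8.GetBytes("hello"));
        WriteFramed(ms, Encoding.UTF8.GetBytes("world"));

        ms.Position = 0;
        Console.WriteLine(Encoding.UTF8.GetString(ReadFramed(ms))); // hello
        Console.WriteLine(Encoding.UTF8.GetString(ReadFramed(ms))); // world
    }
}
```

The sender must use the same convention: write the serialized object's byte count first, then the bytes themselves.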

Solution 2

Yeah, but what if you lose the connection before receiving the size? That is, right before the following line:

// message framing. First, read the #bytes to expect. 

int objectSize = br.ReadInt32(); 

ReadInt32() will block the thread indefinitely.
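One way to bound that wait is to set NetworkStream.ReadTimeout, so a blocked read throws an IOException instead of hanging forever. Here is a self-contained sketch over a loopback connection; the port number (57777) is an arbitrary choice for the demo:

```csharp
using System;
using System.IO;
using System.Net;
using System.Net.Sockets;

class ReadTimeoutDemo
{
    static void Main()
    {
        var listener = new TcpListener(IPAddress.Loopback, 57777); // arbitrary demo port
        listener.Start();

        var client = new TcpClient();
        client.Connect(IPAddress.Loopback, 57777);
        TcpClient serverSide = listener.AcceptTcpClient();

        NetworkStream ns = serverSide.GetStream();
        ns.ReadTimeout = 500; // ms; a blocked Read now throws instead of waiting forever

        bool timedOut = false;
        try
        {
            // The client never sends anything, so this read cannot complete.
            ns.ReadByte();
        }
        catch (IOException)
        {
            timedOut = true;
        }

        Console.WriteLine(timedOut); // True
        listener.Stop();
    }
}
```

Note that a timeout only caps the wait; it cannot distinguish a dead peer from a slow one, so you would still combine it with framing or a zero-byte read check.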

Blair Holloway

Programmer, developer of video games, advocate of XNA Game Studio, occasional coffee addict.

Updated on June 04, 2022

Comments

  • Blair Holloway
    Blair Holloway almost 2 years

    A friend of mine came to me with a problem: when using the NetworkStream class on the server end of the connection, if the client disconnects, NetworkStream fails to detect it.

    Stripped down, his C# code looked like this:

    List<TcpClient> connections = new List<TcpClient>();
    TcpListener listener = new TcpListener(7777);
    listener.Start();
    
    while(true)
    {
        if (listener.Pending())
        {
            connections.Add(listener.AcceptTcpClient());
        }
        TcpClient deadClient = null;
        foreach (TcpClient client in connections)
        {
            if (!client.Connected)
            {
                deadClient = client;
                break;
            }
            NetworkStream ns = client.GetStream();
            if (ns.DataAvailable)
            {
                BinaryFormatter bf = new BinaryFormatter();
                object o = bf.Deserialize(ns);
                ReceiveMyObject(o);
            }
        }
        if (deadClient != null)
        {
            deadClient.Close();
            connections.Remove(deadClient);
        }
        Thread.Sleep(0);
    }
    

    The code works, in that clients can successfully connect and the server can read data sent to it. However, if the remote client calls tcpClient.Close(), the server does not detect the disconnection - client.Connected remains true, and ns.DataAvailable is false.

    A search of Stack Overflow provided an answer - since Socket.Receive is not being called, the socket is not detecting the disconnection. Fair enough. We can work around that:

    foreach (TcpClient client in connections)
    {
        client.ReceiveTimeout = 0;
        if (client.Client.Poll(0, SelectMode.SelectRead))
        {
            int bytesPeeked = 0;
            byte[] buffer = new byte[1];
            bytesPeeked = client.Client.Receive(buffer, SocketFlags.Peek);
            if (bytesPeeked == 0)
            {
                deadClient = client;
                break;
            }
            else
            {
                NetworkStream ns = client.GetStream();
                if (ns.DataAvailable)
                {
                    BinaryFormatter bf = new BinaryFormatter();
                    object o = bf.Deserialize(ns);
                    ReceiveMyObject(o);
                }
            }
        }
    }
    

    (I have left out exception handling code for brevity.)

This code works; however, I would not call this solution "elegant". The only other solution to the problem I am aware of is to spawn a thread per TcpClient and allow the BinaryFormatter.Deserialize (née NetworkStream.Read) call to block, which would detect the disconnection correctly. Though, this does have the overhead of creating and maintaining a thread per client.

    I get the feeling that I'm missing some secret, awesome answer that would retain the clarity of the original code, but avoid the use of additional threads to perform asynchronous reads. Though, perhaps, the NetworkStream class was never designed for this sort of usage. Can anyone shed some light?

    Update: Just want to clarify that I'm interested to see if the .NET framework has a solution that covers this use of NetworkStream (i.e. polling and avoiding blocking) - obviously it can be done; the NetworkStream could easily be wrapped in a supporting class that provides the functionality. It just seemed strange that the framework essentially requires you to use threads to avoid blocking on NetworkStream.Read, or, to peek on the socket itself to check for disconnections - almost like it's a bug. Or a potential lack of a feature. ;)
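For reference, the thread-per-client alternative mentioned above can be sketched as follows. The key property it relies on is that a blocking Read returns 0 when the peer closes gracefully. This is a self-contained loopback demo; the port number (57778) is an arbitrary choice:

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Threading;

class ThreadPerClientDemo
{
    static void Main()
    {
        var listener = new TcpListener(IPAddress.Loopback, 57778); // arbitrary demo port
        listener.Start();

        var client = new TcpClient();
        client.Connect(IPAddress.Loopback, 57778);
        TcpClient serverSide = listener.AcceptTcpClient();

        bool disconnectDetected = false;
        var serverThread = new Thread(() =>
        {
            var buffer = new byte[1024];
            NetworkStream ns = serverSide.GetStream();
            // A blocking Read returns 0 once the peer closes the connection.
            while (ns.Read(buffer, 0, buffer.Length) > 0)
            {
                // ... hand the received bytes to the deserializer here ...
            }
            disconnectDetected = true;
        });
        serverThread.Start();

        client.Close(); // remote side disconnects
        serverThread.Join();

        Console.WriteLine(disconnectDetected); // True
        listener.Stop();
    }
}
```

The per-client thread blocks harmlessly until data arrives or the connection drops, which is exactly why this approach detects disconnection without polling.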

  • Blair Holloway
    Blair Holloway over 14 years
    This allows disconnection detection by interpreting the result of the Read calls. However, my curiosity stems from the framework's apparent lack of "another" way to do this, especially given the elegance it affords the original implementation. Also, the original code was correctly delineating objects, so multiple messages could be sent. I assume that this is due to the way BinaryFormatter serializes the objects - it inherently knows when it's "done". (Though, the code you have presented does include the need for a delimiter.)
  • feroze
    feroze over 14 years
Can you tell me how you would want to see this implemented in the framework? The Framework is just a wrapper on top of sockets, which is a transport mechanism; other semantics need to be implemented by the application. Also, BinaryFormatter takes a buffer as input for deserialization, so you need to give it the exact buffer needed to get the object back. Otherwise it won't work as expected.
  • Blair Holloway
    Blair Holloway over 14 years
    You are right - deserialization does need an exact buffer or it will die painfully, and it's not BinaryFormatter's responsibility to ensure the stream is complete; in this case, nor is it NetworkStream's. I think perhaps I've been thinking about this the wrong way, and that it's not the Framework's responsibility, but mine. ;)