What is the best approach to send large UDP packets in sequence?


Solution 1

It's exactly the problem you described. Each datagram you broadcast is split into 44 packets. If any one of those packets is lost, the whole datagram is lost. As soon as you have enough traffic to cause, say, 1% packet loss, you have roughly 35% datagram loss; 2% packet loss means roughly 60% datagram loss.
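
The arithmetic behind those figures: a datagram survives only if all 44 of its fragments survive, so with a per-fragment loss rate p the datagram loss rate is 1 - (1 - p)^44. A quick check (class and variable names are just for illustration):

    public class FragmentLossDemo {
        public static void main(String[] args) {
            int fragments = 44; // one 65,507-byte datagram -> 44 on-the-wire packets
            for (double p : new double[] {0.01, 0.02}) {
                // The datagram is lost unless every single fragment arrives.
                double datagramLoss = 1.0 - Math.pow(1.0 - p, fragments);
                System.out.printf("%.0f%% packet loss -> %.1f%% datagram loss%n",
                        p * 100, datagramLoss * 100);
            }
        }
    }

This prints roughly 36% and 59%, matching the figures above.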

You need to keep your broadcast datagrams small enough not to fragment. If you have a stream of 65,507-byte chunks, and you cannot change the fact that you need the whole chunk for the data to be useful, then naive UDP broadcast was a bad choice.

I'd have to know a lot more about the specifics of your application to make a sensible recommendation. But if you have a chunk of data around 64 KB that is only useful as a whole, and you can't change that, then you should use an approach that divides the data into pieces with enough redundancy that some pieces can be lost. With erasure coding, you can divide 65,507 bytes of data into 46 chunks, each 1,490 bytes, such that the original data can be reconstructed from any 44 chunks. This would tolerate moderate datagram loss with only about a 4.5% increase in data size.
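
In Java, one way to sketch this is with a Reed-Solomon library such as Backblaze's open-source JavaReedSolomon. The calls below (ReedSolomon.create, encodeParity) follow that project's README, but treat the exact API and the framing as assumptions to verify against whatever library you actually pick:

    import com.backblaze.erasure.ReedSolomon;

    public class ErasureSplit {
        // 44 data shards + 2 parity shards: any 44 of the 46 shards
        // are enough to rebuild the original 65,507-byte chunk.
        static final int DATA_SHARDS = 44;
        static final int PARITY_SHARDS = 2;
        static final int SHARD_SIZE = 1490; // 44 * 1490 = 65,560 >= 65,507

        static byte[][] encode(byte[] data) {
            byte[][] shards = new byte[DATA_SHARDS + PARITY_SHARDS][SHARD_SIZE];
            // Spread the payload across the data shards (tail is zero-padded).
            for (int i = 0; i < DATA_SHARDS; i++) {
                int offset = i * SHARD_SIZE;
                int len = Math.min(SHARD_SIZE, Math.max(0, data.length - offset));
                if (len > 0) {
                    System.arraycopy(data, offset, shards[i], 0, len);
                }
            }
            // Fill in the parity shards, then send each shard as its own
            // small datagram, tagged with a chunk ID and shard index.
            ReedSolomon rs = ReedSolomon.create(DATA_SHARDS, PARITY_SHARDS);
            rs.encodeParity(shards, 0, SHARD_SIZE);
            return shards;
        }
    }

Each shard fits in a single MTU-sized frame, so nothing gets fragmented by the IP layer, and losing up to two shards per chunk costs you nothing.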

Solution 2

TCP is used instead of UDP precisely when you need reliable, correctly ordered delivery. But assuming you really do need UDP for broadcasting, you could:

  1. debug the network to see how and where packets are lost, or whether it is the receiver that is clogged or lagging. Often, though, you don't have control over these things. Is a WiFi network involved? If so, it's hard to get good QoS.

  2. do something at the application layer to ensure ordering and reliable delivery. For example, SIP normally runs over UDP, but the protocol uses transactions and sequence numbers so that clients and servers retransmit messages as needed (see the sketch after this list).

  3. implement packet loss concealment. Using mathematics, the receiver can recreate a lost packet, analogous to how a RAID array can lose a drive and still function.
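
A minimal sketch of option 2, assuming nothing beyond the JDK (the class and the 4-byte header are invented here for illustration, not part of any standard): the sender prefixes every datagram with a sequence number, and the receiver can then detect gaps, restore ordering, or request a retransmission, much like SIP's transactions do.

    import java.io.IOException;
    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.nio.ByteBuffer;

    public class SequencedSender {
        private final DatagramSocket socket;
        private final InetAddress broadcast;
        private final int port;
        private int seq = 0;

        public SequencedSender(DatagramSocket socket, InetAddress broadcast, int port) {
            this.socket = socket;
            this.broadcast = broadcast;
            this.port = port;
        }

        // Prefix each payload with a 4-byte sequence number. The receiver
        // reads the first 4 bytes to detect gaps and re-request lost data.
        public void send(byte[] payload) throws IOException {
            ByteBuffer buf = ByteBuffer.allocate(4 + payload.length);
            buf.putInt(seq++).put(payload);
            socket.send(new DatagramPacket(buf.array(), buf.capacity(), broadcast, port));
        }
    }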

The fact that your setup works fine for a minute and then doesn't is a hint that there is either network congestion or software congestion on the sender or receiver side.
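
If you suspect software congestion on the sender, one concrete thing to check from Java is the socket's actual send buffer size; the OS may silently cap whatever you request, and a too-small buffer drops datagrams once the sender outpaces the network (a sketch, names mine):

    import java.net.DatagramSocket;
    import java.net.SocketException;

    public class BufferCheck {
        public static void main(String[] args) throws SocketException {
            DatagramSocket socket = new DatagramSocket();
            // Request a 1 MiB send buffer, then read back what the OS
            // actually granted; the two can differ.
            socket.setSendBufferSize(1 << 20);
            System.out.println("send buffer: " + socket.getSendBufferSize() + " bytes");
            socket.close();
        }
    }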

Can you do some packet captures with Wireshark and share the results?
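
If you do, a capture filter of "udp port 9876" (the port from your code) in Wireshark or tcpdump will isolate just this traffic; fragments of your large datagrams typically appear as "Fragmented IP protocol" rows, so you can count how many fragments each send produces and spot which ones go missing.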


Comments

  • Fernando Siqueira, almost 2 years ago

    I have an Android application that needs to send data over UDP every 100 milliseconds. Each UDP packet is 15,000 bytes on average, and the packets are sent as broadcasts.

    Every 100 milliseconds, the lines below are run in a loop:

    DatagramPacket sendPacket = new DatagramPacket(sendData, sendData.length, broadcast, 9876); 
    clientSocket.send(sendPacket);
    

    The application starts out working fine, but after about 1 minute the frequency of received packets decreases, until eventually no packets arrive at the destination.

    The theoretical limit (on Windows) for the maximum size of a UDP packet is 65,507 bytes.

    I know the typical MTU of a network is 1,500 bytes, and when I send a bigger packet it is broken into several fragments; if one fragment does not reach the destination, the whole packet is lost.

    I do not understand why the packets arrive correctly for the first minute and then stop arriving. What would be the best approach to solve this problem?

  • SJuan76, over 10 years ago
    (To the OP) An option would be splitting the UDP message into several smaller UDP messages, so that if one packet is lost, only a minor part of the data is lost. Of course, that means you probably need a mechanism for rebuilding the original message at the receiver, which may need to request retransmission of a missing datagram; effectively, you'd be implementing (something like) TCP on top of UDP.
  • Fernando Siqueira, over 10 years ago
    I have an Android application that uses OpenCV to analyze frames from a camera and send these frames via UDP to another device. I'll try to compress the frames more and divide them into datagrams with redundancy, and I'll study erasure coding. Thanks for the tip. I just don't understand why there is a greater loss of packets after an average of 1 minute of execution.
  • Fernando Siqueira, over 10 years ago
    I did not know about Wireshark; I'll read up on it. I had not thought to debug the network. It might be a good way for me to understand what is happening and learn a bit more about it. Thanks for the tip.
  • Fernando Siqueira, over 10 years ago
    I chose UDP because my application sends a sequence of images, one image per packet, and it is no problem if the application loses some packets. I want the transmission to be as quick as possible, so I chose UDP. The data is transmitted over a private network using WiFi.
  • David Schwartz, over 10 years ago
    Are you maxing out the bandwidth or is there plenty of headroom?
  • AaronLS, over 9 years ago
    TCP would suffer from the same performance problem, as the entire packet is discarded and must be resent if a single fragment is lost; reliability is implemented at the packet level, not the fragment level. However, TCP stacks almost always try to split a message into packets smaller than the MTU and thus avoid fragmentation, essentially pre-fragmenting the message at the TCP layer into smaller packets, such that if a single small packet is lost, the TCP layer can resend just that packet and rebuild the larger message. For unavoidably large messages, TCP may therefore perform better than UDP.