How to measure the response time between a server and a client that communicate using the UDP protocol?


Solution 1

Just use ping - RTT (round trip time) is one of the standard things it measures. If the size of the packets you're sending matters, ping also lets you specify the size of the data in each packet.

For example, I just sent 10 packets each with a 1024 byte payload to my gateway displaying only the summary statistics:

ping -c 10 -s 1024 -q 192.168.2.1

PING 192.168.2.1 (192.168.2.1) 1024(1052) bytes of data.

--- 192.168.2.1 ping statistics ---

10 packets transmitted, 10 received, 0% packet loss, time 9004ms

rtt min/avg/max/mdev = 2.566/4.921/8.411/2.035 ms

The last line, starting with rtt (round trip time), is the info you're probably looking for.

Solution 2

I think the method you mention is fine. OS and computer load might interfere, but their effect would probably be negligible compared to the amount of time it takes to send the packets over the network.

To even things out a bit, you could always send several packets back and forth and average the times out.
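
To make that concrete, here is a minimal client sketch along those lines (the host, port, packet count, and payload size are only placeholders, and it assumes the far end echoes each datagram straight back - for example the responder sketched under Solution 3). It times every round trip with System.nanoTime() and prints the average:

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;

    public class UdpRttClient {
        public static void main(String[] args) throws Exception {
            // Placeholder target: any host running something that echoes UDP payloads back.
            InetAddress host = InetAddress.getByName("192.168.2.1");
            int port = 9999;
            int trials = 10;
            byte[] payload = new byte[1024];

            try (DatagramSocket socket = new DatagramSocket()) {
                socket.setSoTimeout(2000);              // give up on a lost packet after 2 s
                long totalNanos = 0;
                for (int i = 0; i < trials; i++) {
                    DatagramPacket request = new DatagramPacket(payload, payload.length, host, port);
                    DatagramPacket reply = new DatagramPacket(new byte[payload.length], payload.length);

                    long start = System.nanoTime();     // monotonic, finer than millisecond resolution
                    socket.send(request);
                    socket.receive(reply);              // blocks until the echo arrives (or times out)
                    totalNanos += System.nanoTime() - start;
                }
                System.out.printf("average RTT over %d packets: %.3f ms%n",
                        trials, totalNanos / (double) trials / 1_000_000.0);
            }
        }
    }

Averaging over several packets, as suggested above, smooths out scheduling jitter on either host; a lost packet shows up as a SocketTimeoutException rather than silently skewing the average.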

Solution 3

If you have access to the code, then yes, just measure the time between sending the request and receiving the answer. Bear in mind that the standard timer in Java only has millisecond resolution.

Alternatively, use Wireshark to capture the packets on the wire - it records a timestamp for each captured packet.

Clearly in both cases the measured time depends on how fast the other end responds to your original request.

If you really just want to measure network latency and you control the far end yourself, use something like the echo 7/udp service that many UNIX servers still support (though it's usually disabled to prevent its use in reflected DDoS attacks).
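
Since the echo service is usually disabled, a tiny responder of your own can stand in for it on an unprivileged port. A rough sketch (the port number is just a placeholder; the measuring client above would simply point at it):

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;

    public class UdpEchoResponder {
        public static void main(String[] args) throws Exception {
            int port = 9999;                    // unprivileged stand-in for echo 7/udp
            byte[] buffer = new byte[65507];    // maximum UDP payload size

            try (DatagramSocket socket = new DatagramSocket(port)) {
                System.out.println("Echoing UDP datagrams on port " + port);
                while (true) {
                    DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                    socket.receive(packet);     // blocks until a datagram arrives
                    // Send the same bytes straight back to whoever sent them.
                    socket.send(new DatagramPacket(packet.getData(), packet.getLength(),
                            packet.getAddress(), packet.getPort()));
                }
            }
        }
    }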


Comments

  • Admin, almost 2 years ago

    The aim of the test is to check the shape of the network response time between two hosts (client and server). Network response time = the round trip time it takes to send a packet of data and receive it back. I am using the UDP protocol. How could I compute the response time? I could just compute TimeOfClientResponseReceived - TimeOfClientRequest, but I'm not sure this is the best approach. I can only do this from inside the code, and I'm thinking that the OS and computer load might interfere with the measurement initiated by the client. By the way, I'm using Java.

    I would like to hear your ideas.