Avoiding UDP client-side packet loss
-
Hi, I'm trying to write a simple UDP client application which receives UDP packets from a server, each 384 bytes in size. Unfortunately, I'm encountering what seems to be client-side packet loss, which only gets worse when multiple threads are added to the application. This is not a result of high CPU usage, since the CPU usage is only a few percent. I can only assume that the packet loss is a result of context switching done by the OS, and my application not being able to receive the packets in time. I'm using VC++ on Win2K, and the Winsock2 functions in the Platform SDK to create a socket and receive UDP packets on a specified port. I've tried setting the process and thread priority to time critical, which does bring some improvement, but the packet loss continues. Is there any way to avoid client-side UDP packet loss? Ideas / code examples would be greatly appreciated! Danny
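For concreteness, a minimal sketch of the kind of blocking receiver described above, assuming Winsock2 on Win32 (link with ws2_32.lib); the port number 5000 and the processing step are placeholders:

```cpp
#include <winsock2.h>
#include <cstdio>

int main()
{
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
        return 1;

    SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    if (s == INVALID_SOCKET)
        return 1;

    sockaddr_in local = {0};
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    local.sin_port = htons(5000);            // placeholder port

    if (bind(s, (sockaddr*)&local, sizeof(local)) == SOCKET_ERROR)
        return 1;

    char buf[384];                           // datagram size from the post
    for (;;)
    {
        sockaddr_in from;
        int fromLen = sizeof(from);
        int n = recvfrom(s, buf, sizeof(buf), 0, (sockaddr*)&from, &fromLen);
        if (n == SOCKET_ERROR)
            break;
        // Process (or better, queue) the datagram as quickly as possible so
        // the thread gets back into recvfrom() before the next packet arrives.
        printf("received %d bytes\n", n);
    }

    closesocket(s);
    WSACleanup();
    return 0;
}
```

The less work done between recvfrom() calls, the less the socket's receive buffer has to absorb.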
-
UDP is connectionless, so it's always possible you will lose packets. Either use TCP sockets, or build some kind of protocol on top of UDP to recover from packet loss. You can also play with the network card driver's buffer settings, use a bigger packet size, or decrease the send frequency. And if you are not running a real-time OS, you cannot be sure the OS will store all of the incoming packets; a driver will run at a higher priority than your process, so it's possible you'll lose packets there. Zolee
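One concrete buffer setting worth trying before touching the driver is the per-socket receive buffer, which gives the stack more room to hold datagrams while the application is busy. A hedged sketch, assuming Winsock2; the 1 MB figure is an arbitrary starting point, not a recommendation:

```cpp
#include <winsock2.h>
#include <cstdio>

// Enlarge the socket's receive buffer so the stack can queue more datagrams
// while the application thread is not waiting in recvfrom(). The stack may
// cap the value, so read it back to see what was actually granted.
bool EnlargeReceiveBuffer(SOCKET s, int bytes = 1024 * 1024)
{
    if (setsockopt(s, SOL_SOCKET, SO_RCVBUF,
                   reinterpret_cast<const char*>(&bytes), sizeof(bytes)) == SOCKET_ERROR)
    {
        printf("setsockopt(SO_RCVBUF) failed: %d\n", WSAGetLastError());
        return false;
    }

    int granted = 0;
    int len = sizeof(granted);
    if (getsockopt(s, SOL_SOCKET, SO_RCVBUF,
                   reinterpret_cast<char*>(&granted), &len) != SOCKET_ERROR)
        printf("SO_RCVBUF is now %d bytes\n", granted);

    return true;
}
```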
-
You probably don't want this answer, but... yes, it's called TCP. Before flaming me, consider the U in UDP. It's there for a reason. If you can't deal with lost packets (by means of reckoning, ignoring, application-initiated retransmissions and so on), go ahead and use TCP. Of course you can try to minimize lost packets, but since you can never be sure you got them all, why bother? Just my 2 cents... "Well I'm just a hard working corporate slave, my mind should hate what my body does crave. Well I'm just a humble corporate slave, driving myself into a corporate grave" Corporate Slave, SNOG
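If UDP is kept, the "reckoning" mentioned above can be as simple as prefixing every datagram with a sequence number so the receiver at least knows when something was dropped. A rough sketch; the header layout and helper are made up for illustration and are not part of any existing protocol:

```cpp
#include <winsock2.h>   // ntohl()

#pragma pack(push, 1)
struct DatagramHeader
{
    unsigned long seq;  // sender increments this for every datagram,
                        // and sends it in network byte order
};
#pragma pack(pop)

// Returns how many datagrams went missing before this one and updates lastSeq.
// (Wrap-around and reordering are ignored to keep the sketch short.)
unsigned long CheckSequence(const char* packet, unsigned long& lastSeq)
{
    const DatagramHeader* hdr = reinterpret_cast<const DatagramHeader*>(packet);
    unsigned long seq = ntohl(hdr->seq);
    unsigned long lost = (seq > lastSeq + 1) ? (seq - lastSeq - 1) : 0;
    lastSeq = seq;
    return lost;
}
```

From there the application can decide whether to ignore the gap, interpolate, or ask the server to retransmit.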
-
"Before flaming me consider the U in UDP. It's there for a reason." U stands for "user"; I don't see how this is relevant ;P Joaquín M López Muñoz Telefónica, Investigación y Desarrollo
-
It's the answer to the question: who will implement everything that's lacking to make this really useful? "Well I'm just a hard working corporate slave, my mind should hate what my body does crave. Well I'm just a humble corporate slave, driving myself into a corporate grave" Corporate Slave, SNOG
-
Note that if you send UDP messages from several threads before ARP resolution has completed (because the ARP entry has expired, or because it's the first message sent to that host), some (or all?) of the UDP messages will be lost. According to MSDN, this is due to the fact that there can be only one pending ARP reply per host IP. Sorry, I cannot find the reference at the moment. Remedy: call SendARP() before sendto(). Gisle V.
"If you feel paranoid it doesn't mean they're not after you!" -- Woody Allen