Flow control using asynchronous sockets
-
Hey, I'm having a problem using the socket's asynchronous BeginSend method. My server application sends a large amount of data in small chunks, so I am calling BeginSend many times in a short period of time. I believe the receiving application can't keep up, so the sender queues the calls until they can be sent, which means memory keeps growing on the sender. I think I need to implement some sort of flow control to stop calling BeginSend when too many calls are pending. Can anyone help me out or point me to some literature on this issue? Thanks
-
This is the pattern I'm familiar with (in pseudocode):
Socket socket;
Queue<Data> pendingData;
bool sending;

public void SendAsync(Data data)
{
    if (sending)
    {
        pendingData.Enqueue(data);
    }
    else
    {
        sending = true;
        socket.BeginSend(data, new Callback(SendCallback));
    }
}

private void SendCallback()
{
    socket.EndSend();
    if (pendingData.Count > 0)
    {
        socket.BeginSend(pendingData.DequeueReasonableAmountOfData(), new Callback(SendCallback));
    }
    else
    {
        sending = false;
    }
}
This maintains only one outstanding asynchronous send operation at a time, and the data queued to be sent stays in user code where it is easy to manage. That's useful if the queued data needs to be resent after the client disconnects and reconnects, if you want to cap how much data can be queued, or if you want to prioritize the queued data in some way. Another important part is making sure the receiver reads in such a way that parsing the data doesn't block the socket from immediately receiving the next packet being sent.
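On the receiving side, one way to keep parsing off the receive path is to hand each chunk to a worker thread and immediately re-arm the receive. This is a sketch with illustrative names (`NonBlockingReceiver`, the buffer size, the parse hook), assuming .NET 4's `BlockingCollection` is available:

```csharp
using System;
using System.Collections.Concurrent;
using System.Net.Sockets;
using System.Threading;

// Sketch: received chunks go onto a queue that a worker thread drains,
// so the next BeginReceive is posted before any parsing happens.
public class NonBlockingReceiver
{
    private readonly Socket socket;
    private readonly byte[] buffer = new byte[8192]; // illustrative size
    private readonly BlockingCollection<byte[]> received = new BlockingCollection<byte[]>();

    public NonBlockingReceiver(Socket connectedSocket)
    {
        socket = connectedSocket;
        new Thread(ParseLoop) { IsBackground = true }.Start();
    }

    public void Start()
    {
        socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, ReceiveCallback, null);
    }

    private void ReceiveCallback(IAsyncResult ar)
    {
        int read = socket.EndReceive(ar);
        if (read == 0) { received.CompleteAdding(); return; } // peer closed

        byte[] chunk = new byte[read];
        Buffer.BlockCopy(buffer, 0, chunk, 0, read);
        received.Add(chunk);

        // Re-arm the receive immediately; parsing happens on the worker.
        socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, ReceiveCallback, null);
    }

    private void ParseLoop()
    {
        foreach (byte[] chunk in received.GetConsumingEnumerable())
        {
            // Parse(chunk); // application-specific parsing goes here
        }
    }
}
```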
:badger:
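A fuller C# sketch of the sender pattern above, with a lock for thread safety (the callback runs on a thread-pool thread) and a byte cap that gives the caller back-pressure; the class name, cap size, and byte-array payloads are illustrative assumptions:

```csharp
using System;
using System.Collections.Generic;
using System.Net.Sockets;

// Sketch: one BeginSend in flight at a time; further data is queued
// up to MaxQueuedBytes, beyond which SendAsync reports back-pressure.
public class ThrottledSender
{
    private const int MaxQueuedBytes = 1 << 20; // 1 MB cap (illustrative)

    private readonly Socket socket;
    private readonly Queue<byte[]> pendingData = new Queue<byte[]>();
    private readonly object sync = new object();
    private int queuedBytes;
    private bool sending;

    public ThrottledSender(Socket connectedSocket)
    {
        socket = connectedSocket;
    }

    // Returns false when the queue is full, so the caller can slow down.
    public bool SendAsync(byte[] data)
    {
        lock (sync)
        {
            if (sending)
            {
                if (queuedBytes + data.Length > MaxQueuedBytes)
                    return false; // back-pressure: caller should retry later

                pendingData.Enqueue(data);
                queuedBytes += data.Length;
                return true;
            }
            sending = true;
        }

        socket.BeginSend(data, 0, data.Length, SocketFlags.None, SendCallback, null);
        return true;
    }

    private void SendCallback(IAsyncResult ar)
    {
        socket.EndSend(ar); // may throw on a broken connection

        byte[] next;
        lock (sync)
        {
            if (pendingData.Count == 0)
            {
                sending = false;
                return;
            }
            next = pendingData.Dequeue();
            queuedBytes -= next.Length;
        }

        socket.BeginSend(next, 0, next.Length, SocketFlags.None, SendCallback, null);
    }
}
```

When `SendAsync` returns false, the producer can pause, drop data, or block briefly, whichever suits the application.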
-
Thanks this is exactly what I needed.