A theoretical problem when coding a BUSY TCP/UDP server [modified]
-
Hi, I would like to code a TCP/UDP server and I've written a major part of it, but it seems that if the server is busy I will have a problem. Here is my listening code:

private void Listen()
{
    Console.WriteLine("Listening on port " + port.ToString() + ".");
    listener.Start();
    while (KeepWorking)
    {
        // Wait until a client requests to connect.
        TcpClient client = listener.AcceptTcpClient();
        Console.WriteLine("---------------------");
        Console.WriteLine(client.Client.RemoteEndPoint.ToString() + " connected.");
        ClientHistory.Add(client.Client.RemoteEndPoint.ToString());
        ThreadPool.QueueUserWorkItem(new WaitCallback(HandleCommunication), client);
    }
}

If more than one client requests to connect at the same time, then one of them won't be able to connect because I am not listening. Likewise, if a client connects and another client tries to connect before the first one has been queued to the ThreadPool, that client won't be able to connect either, because I am not listening yet. Is there a way to solve this, or is this how every server works? :) Thanks.
modified on Tuesday, January 26, 2010 5:24 AM
-
From the documentation[^]: "Start will queue incoming connections until you either call the Stop method or it has queued MaxConnections." That's still open to multiple interpretations, but I think it means that your problem does not exist, because the listener keeps queuing new connections while you're not blocking on Accept. Even if it didn't work that way, the time window in which it could go wrong is infinitesimal, unless ClientHistory.Add does something other than expected (maybe a blocking write to a log file or so; it looks like it just adds to a List<string>).
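To make that queue explicit, here is a minimal sketch (not your code; the class name, port 8080 and the handler are placeholders) of an accept loop that passes a backlog to Start, so connections that arrive between Accept calls simply wait in that queue:

using System;
using System.Net;
using System.Net.Sockets;
using System.Threading;

class BacklogListener
{
    static void Main()
    {
        // Explicit backlog: the OS will hold up to 100 pending connections
        // for us while we are busy between AcceptTcpClient calls.
        TcpListener listener = new TcpListener(IPAddress.Any, 8080);
        listener.Start(100);

        while (true)
        {
            // Dequeues one pending connection from the backlog (or blocks until one arrives).
            TcpClient client = listener.AcceptTcpClient();
            ThreadPool.QueueUserWorkItem(HandleCommunication, client);
        }
    }

    // Placeholder handler; your real HandleCommunication would go here.
    static void HandleCommunication(object state)
    {
        using (TcpClient client = (TcpClient)state)
        {
            Console.WriteLine(client.Client.RemoteEndPoint + " handled.");
        }
    }
}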
-
OK, let's assume I've deleted all those ClientHistory lines. It still seems like two clients can't connect at exactly the same time: between AcceptTcpClient() and QueueUserWorkItem(), the server doesn't accept new connections for milliseconds.
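For what it's worth, here is a rough sketch of how I could test that window by firing a burst of simultaneous connects at the server (127.0.0.1, port 8080 and the count of 50 clients are just placeholders):

using System;
using System.Net.Sockets;
using System.Threading;

class ConnectBurst
{
    static void Main()
    {
        // Start many connection attempts at (roughly) the same moment and
        // report any that fail to connect.
        for (int i = 0; i < 50; i++)
        {
            new Thread(() =>
            {
                try
                {
                    using (TcpClient client = new TcpClient("127.0.0.1", 8080))
                    {
                        Console.WriteLine("connected");
                    }
                }
                catch (SocketException ex)
                {
                    Console.WriteLine("failed: " + ex.SocketErrorCode);
                }
            }).Start();
        }
    }
}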
-
Does it really happen between them, or sometime after QueueUserWorkItem? Does the connection handler send or receive any data? A NIC can only do one thing at a time, so during a send or receive no one can connect (they will be delayed).
-
I've never actually experienced the problem I am talking about. I am just assuming that, if enough clients tried to connect to this server at the same time, some of them wouldn't be able to.
-
Ok, well, I was in a good mood and decompiled TcpListener.Start for you.

public void Start()
{
    this.Start(0x7fffffff);
}

public void Start(int backlog)
{
    if ((backlog > 0x7fffffff) || (backlog < 0))
    {
        throw new ArgumentOutOfRangeException("backlog");
    }
    if (Logging.On)
    {
        Logging.Enter(Logging.Sockets, this, "Start", (string) null);
    }
    if (this.m_ServerSocket == null)
    {
        throw new InvalidOperationException(SR.GetString("net_InvalidSocketHandle"));
    }
    if (this.m_Active)
    {
        if (Logging.On)
        {
            Logging.Exit(Logging.Sockets, this, "Start", (string) null);
        }
    }
    else
    {
        this.m_ServerSocket.Bind(this.m_ServerSocketEP);
        this.m_ServerSocket.Listen(backlog); // <-- see here!
        this.m_Active = true;
        if (Logging.On)
        {
            Logging.Exit(Logging.Sockets, this, "Start", (string) null);
        }
    }
}

So the result is a call to SomeSocket.Listen(int.MaxValue), which according to the documentation[^] "Places a Socket in a listening state. Int32 backlog: The maximum length of the pending connections queue." In other words, don't worry: the connections are being placed in the backlog. Of course it can happen that connections arrive at a faster rate than you can handle them, but that is really just a denial of service attack, and you can't do anything about it; no matter how many connections per second you can handle, they could always throw one more at you and you'd lose. So it all comes down to "magic happens in the background in code that you didn't write". In a more theoretical setting, in which connections were only accepted while you are actually waiting for them, you would indeed have a problem. Fortunately, real-world sockets don't work that way :)
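To illustrate what that backlog buys you, here is a small sketch at the raw Socket level (the port 8080 and the backlog of 100 are arbitrary): even while this code is sleeping and nobody is calling Accept, the OS completes incoming handshakes and parks the connections in the queue.

using System;
using System.Net;
using System.Net.Sockets;
using System.Threading;

class BacklogDemo
{
    static void Main()
    {
        // Roughly what TcpListener.Start does under the hood: Bind + Listen(backlog).
        Socket server = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        server.Bind(new IPEndPoint(IPAddress.Any, 8080));
        server.Listen(100); // the OS now queues up to 100 pending connections

        // Deliberately not accepting for a while: clients connecting during this
        // sleep are not refused, they just sit in the backlog.
        Thread.Sleep(5000);

        // Each Accept call pops one queued connection off the backlog.
        Socket client = server.Accept();
        Console.WriteLine(client.RemoteEndPoint + " was waiting in the backlog.");
        client.Close();
        server.Close();
    }
}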
-
Thank you. I now understand it all. :)