recv no blocking
-
Hi, I'm developing a multithreaded (multi-client) server in C on Windows, and I have a problem with the recv function. My clients send info to the server every 5 seconds, and recv blocks the thread until data arrives. I tried ioctlsocket, but the number of bytes read is always 0; I tried the MSG_PEEK flag, but then the data is copied into the buffer without being removed from the input queue, so I keep seeing the same messages. Is there a way to make recv non-blocking? Thanks!
-
Usually a blocking recv is not a problem at all in a multithreaded application: the receiving thread blocks, but the other threads keep running and the application stays responsive. On the other hand, as you may find in the documentation[^], you might use non-blocking mode on sockets; I guess, however, that your application would become more involved.
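For reference, a minimal sketch of that non-blocking mode (an illustration only, assuming an already-connected SOCKET; error handling trimmed to the essentials): ioctlsocket() with FIONBIO switches the socket to non-blocking, after which recv() fails immediately with WSAEWOULDBLOCK when nothing is pending. Note the Sleep() in the loop: a non-blocking poll without it is exactly the kind of loop that pins a CPU at 100%.

#include <winsock2.h>
#include <windows.h>

/* Poll a connected, non-blocking socket; sketch only. */
void pollSocket(SOCKET s)
{
    u_long nonBlocking = 1;
    char buf[250];

    /* FIONBIO with a nonzero argument enables non-blocking mode. */
    if (ioctlsocket(s, FIONBIO, &nonBlocking) != 0)
        return;

    for (;;) {
        int n = recv(s, buf, sizeof(buf), 0);
        if (n > 0) {
            /* n bytes arrived; process buf here */
        } else if (n == 0) {
            break;                          /* peer closed the connection */
        } else if (WSAGetLastError() == WSAEWOULDBLOCK) {
            Sleep(100);  /* nothing pending; sleep so the poll doesn't spin */
        } else {
            break;                          /* real socket error */
        }
    }
}

select() with a timeout is the other common way to wait for readability without committing to a blocking recv().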
-
Thanks for the reply. The problem is that if my server must send data to a client, it can wait up to 5 seconds because the recv call blocks the thread. I tried launching another thread (for each client) for the send operation, but the CPU runs at 100%.
Quote:
The problem is that if my server must send data to a client, it can wait up to 5 seconds because the recv call blocks the thread.
As far as I know, the server's send is not blocked by the client's recv.
Quote:
I tried launching another thread (for each client) for the send operation, but the CPU runs at 100%.
Waiting on I/O operations should not consume CPU; there's probably a flaw in your code.
-
I post my code:
DWORD WINAPI cliTh1(LPVOID lpData)
{
    struct CLIENT_INFO *pClientInfo;
    char szClientMsg[250];
    char packet[50];
    HANDLE clientTxThread;

    pClientInfo = (struct CLIENT_INFO *)lpData;
    char *ip = inet_ntoa(pClientInfo->clientAddr.sin_addr);
    printf("SOCKET:%d - IP:%s - THREAD_ID:%ld\n", pClientInfo->hClientSocket, ip, GetCurrentThreadId());
    Q->pClient = pClientInfo;
    pClientInfo->primaConn = 0;
    pClientInfo->indirizzo = 0;
    pClientInfo->p = getDisconnect;

    while (1) {
        if (j >= MAXELEMENTS) { j = 0; }
        if (WSAGetLastError()) {
            if (pClientInfo->primaConn == 1) {
                disconnectBuffer[pClientInfo->indirizzo] = 1;
                pClientInfo->primaConn = 0;
            }
            if (disconnectBuffer[pClientInfo->indirizzo] == 1) {
                creaPackDisc(packet, pClientInfo->indirizzo);
                strcpy(bufferRx[j].packet, packet);
                Enqueue(Q, packet);
                j++;
            } else {
                closesocket(pClientInfo->hClientSocket);
                ExitThread(pClientInfo->txThId);
                ExitThread(GetCurrentThreadId());
            }
            Sleep(1000);
        } else {
            if ((pClientInfo->primaConn == 0) && (pClientInfo->indirizzo != 0)) {
                pClientInfo->connessione = setConnect(GetCurrentThreadId(), pClientInfo->indirizzo, ip);
                if (pClientInfo->connessione == 1) {
                    pClientInfo->primaConn = 1;
                }
                // TX thread for each client
                clientTxThread = CreateThread(NULL, 0, (LPTHREAD_START_ROUTINE)txThread, pClientInfo, 0, &pClientInfo->txThId);
                if (clientTxThread == NULL) {
                    printf("Unable to create client thread");
                } else {
                    CloseHandle(clientTxThread);
                }
            }
            if (recv(pClientInfo->hClientSocket, szClientMsg, sizeof(szClientMsg), 0) > 0) {
                strcpy(bufferRx[j].packet, szClientMsg);
                memset(&szClientMsg[0], 0, sizeof(szClientMsg));
                pClientInfo->indirizzo = calcolaHighLow(bufferRx[j].packet[1], bufferRx[j].packet[2], bufferRx[j].packet[3], bufferRx[j].packet[4]);
                Enqueue(Q, bufferRx[j].packet);
                // ... (rest of the function is cut off in the post)
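Two things stand out in that loop: WSAGetLastError() reports the error of the last Winsock call that failed on the calling thread, so testing it unconditionally at the top of each iteration doesn't tell you the socket's current state; and ExitThread() only exits the calling thread, so ExitThread(pClientInfo->txThId) cannot terminate the TX thread. A minimal sketch of a receive loop driven by recv()'s own return value instead (an illustration, not a drop-in fix; clientRxLoop is a made-up name, CLIENT_INFO is borrowed from the post above, and the packet handling is reduced to comments):

#include <winsock2.h>

/* One receive loop per client thread; blocks in recv(), so it uses no
   CPU while waiting. Assumes the poster's struct CLIENT_INFO. */
DWORD WINAPI clientRxLoop(LPVOID lpData)
{
    struct CLIENT_INFO *info = (struct CLIENT_INFO *)lpData;
    char buf[250];

    for (;;) {
        int n = recv(info->hClientSocket, buf, sizeof(buf), 0);
        if (n > 0) {
            /* n valid bytes in buf: parse the packet and enqueue it */
        } else if (n == 0) {
            break;      /* client closed the connection gracefully */
        } else {
            /* SOCKET_ERROR: WSAGetLastError() is meaningful only here */
            break;
        }
    }
    closesocket(info->hClientSocket);
    return 0;
}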
-
Are you testing with both client and server running on the same machine? Have a look at this thread: "send and recv are cpu intensive?"[^].
-
Yes, I'm testing the clients and the server on the same machine. My goal is: if the system works fine on my PC, I'll have no problems on the server machine. Is there a minimal example of a non-blocking recv?
-
If you only have one physical machine, I suggest installing Windows in a virtual machine, and running one of the tasks there. That would ensure that the platforms are separate, and would help you to track down the 100% CPU issue. IMO, a multithreaded blocking recv() implementation is conceptually much simpler than a non-blocking recv() implementation.
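A sketch of that thread-per-client blocking design (assumed names, minimal error handling, not the poster's code): the accept loop hands each new connection to a worker thread, and each worker simply blocks in recv(), costing no CPU while idle.

#include <winsock2.h>
#include <windows.h>

/* One worker per client; recv() blocks until data, close, or error. */
DWORD WINAPI clientWorker(LPVOID lpData)
{
    SOCKET client = (SOCKET)(ULONG_PTR)lpData;
    char buf[250];
    int n;

    while ((n = recv(client, buf, sizeof(buf), 0)) > 0) {
        /* handle n received bytes */
    }
    closesocket(client);
    return 0;
}

/* Accept loop: spawn a worker for every incoming connection. */
void acceptLoop(SOCKET listener)
{
    for (;;) {
        SOCKET client = accept(listener, NULL, NULL);
        if (client == INVALID_SOCKET)
            break;

        HANDLE h = CreateThread(NULL, 0, clientWorker,
                                (LPVOID)(ULONG_PTR)client, 0, NULL);
        if (h != NULL)
            CloseHandle(h);   /* the worker keeps running without the handle */
        else
            closesocket(client);
    }
}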
If you have an important point to make, don't try to be subtle or clever. Use a pile driver. Hit the point once. Then come back and hit it again. Then hit it a third time - a tremendous whack. --Winston Churchill
-
Daniel Pfeffer wrote:
IMO, a multithreaded blocking recv() implementation is conceptually much simpler than a non-blocking recv() implementation.
Yep, and it works really well... just be careful of hanging sockets on Linux.
Daniel Pfeffer wrote:
If you only have one physical machine, I suggest installing Windows in a virtual machine, and running one of the tasks there. That would ensure that the platforms are separate, and would help you to track down the 100% CPU issue.
This might actually yield the same result. The issue is the super low latency in the network when everything is co-located. Essentially, if your tx and rx run as fast as possible, they can take up all the available CPU time.
-
CPallini wrote:
As far as I know, the server's send is not blocked by the client's recv.
They're not, unless they're on the same thread... in which case everything is blocked by the recv().
-
Albert Holguin wrote:
This might actually yield the same result. The issue is the super low latency in the network when everything is co-located. Essentially, if your tx and rx run as fast as possible, they can take up all the available CPU time.
I thought the problem was an optimization in the network stack that gave special treatment to packets sent to the local IP address (127.0.0.1 or the network address). In this case, the data are simply copied to the recv() buffer, without any thread switches etc. The network drivers provided by the virtual machine monitor may not be optimized to such an extent. While it is possible to detect (and optimize for) packets sent to/from the host O/S, there would be no point in it as the emulation would be less close to a separate machine. I suspect that the virtual machine monitor emulates the physical network card driver, leaving the rest of the network stack untouched.
If you have an important point to make, don't try to be subtle or clever. Use a pile driver. Hit the point once. Then come back and hit it again. Then hit it a third time - a tremendous whack. --Winston Churchill
-
:doh: Perhaps... you'd really only run into this issue in certain cases (where it would actually be a problem), where the data source isn't throttled and is effectively infinite (i.e., mostly test cases).