Well, I didn't want to go that way, but I'm going to be out of options very soon, it seems. Thanks for the replies.
clayman87 - Posts
-
Maximum UDP transmit rate is just 50 MB/s and with 100% CPU usage!??
Yes, a third party is inherently needed to establish a connection (a STUN server), but relaying all the traffic through it is simply not an option. Since I know of no way to separate the two (handshake vs. traffic) with TCP, I was forced to switch to UDP.
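To illustrate what I mean by separating the handshake from the traffic: once both peers have learned each other's public address and port from the rendezvous server, each side simply keeps sending UDP datagrams to the other's public endpoint until something arrives. A rough sketch only (the endpoint values are placeholders, this is not my actual code, and error handling is omitted):
#include <winsock2.h>
#include <stdio.h>

// Sketch only: peerIp/peerPort are whatever the rendezvous (STUN-like) server
// reported as the other client's public endpoint.
void punchHole( SOCKET sock, const char* peerIp, unsigned short peerPort )
{
    sockaddr_in peer = {0};
    peer.sin_family = AF_INET;
    peer.sin_addr.s_addr = inet_addr( peerIp );
    peer.sin_port = htons( peerPort );

    // Don't block forever while waiting for the peer's first datagram.
    int timeoutMs = 500;
    setsockopt( sock, SOL_SOCKET, SO_RCVTIMEO, (const char*)&timeoutMs, sizeof( timeoutMs ) );

    for( int attempt = 0; attempt < 10; ++attempt )
    {
        // Outgoing datagrams open/refresh the NAT mapping on our side;
        // the peer does the same, so eventually packets pass both NATs.
        const char probe[] = "hello";
        sendto( sock, probe, sizeof( probe ), 0, (sockaddr*)&peer, sizeof( peer ) );

        char buf[64];
        sockaddr_in from; int fromLen = sizeof( from );
        if( recvfrom( sock, buf, sizeof( buf ), 0, (sockaddr*)&from, &fromLen ) > 0 )
        {
            printf( "peer reached, traffic now flows directly\n" );
            return;
        }
    }
}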
-
Maximum UDP transmit rate is just 50 MB/s and with 100% CPU usage!??
As far as I know, there's no way to establish a TCP connection between two clients that are both passive (behind NAT), which is part of what I want to accomplish.
-
Maximum UDP transmit rate is just 50 MB/s and with 100% CPU usage!??
Let me rephrase my question. MSDN states: "If no buffer space is available within the transport system to hold the data to be transmitted, sendto will block unless the socket has been placed in a nonblocking mode." In the protocol stack, at which layer is this buffer space located?
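For what it's worth, the only buffer I know how to inspect from user mode is the per-socket send buffer controlled by SO_SNDBUF; a quick sketch of checking it:
#include <winsock2.h>
#include <stdio.h>

// Print the size of the socket's send buffer as the stack reports it.
void printSendBufferSize( SOCKET sock )
{
    int size = 0;
    int len = sizeof( size );
    if( getsockopt( sock, SOL_SOCKET, SO_SNDBUF, (char*)&size, &len ) == 0 )
        printf( "SO_SNDBUF = %d bytes\n", size );
}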
-
Maximum UDP transmit rate is just 50 MB/s and with 100% CPU usage!??
It is on a PCI-E 1.0 lane, which provides 250 MB/s; that should be enough, and indeed it is on Ubuntu 8.10 (at least for 1 gigabit). The question is what causes the 100% CPU load on Windows XP at just 60 MB/s, and on Ubuntu at 130 MB/s.
-
How can a reference to an integer constant change value in a function?
Well, "x" is indeed an "integer constant" from our point of view: by taking a const reference, we promise not to change the value of "x"; we can only read it. However, that does not mean the value of x has to be constant; it only means that _we_ cannot modify the underlying memory through this particular name "x". If the referenced object is not actually constant, then anyone else (or even we ourselves, through another alias) can modify that memory, and the modified value is of course also what we see when reading x. In this example, "x" is just an alias to the first element of the "arr" array, marked const ("you can't modify the value through me"). But since the function takes the array by non-const reference (thereby obtaining an alias for the original, global "arr" array) and changes its first element, it modifies the very object that x refers to, so the value observed through x must be 2 + 2 = 4.
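A minimal, self-contained reconstruction of the situation (the names here are mine, not the exact code from the original question):
#include <iostream>

int arr[] = { 2, 5, 7 };

// Takes the array by non-const reference, so it is allowed to modify it.
void addTwo( int (&a)[3] )
{
    a[0] += 2;                   // changes the original arr[0]
}

int main()
{
    const int& x = arr[0];       // read-only alias to arr[0]; arr[0] itself is not const
    std::cout << x << '\n';      // prints 2
    addTwo( arr );               // modifies arr[0] through another (non-const) alias
    std::cout << x << '\n';      // prints 4: x still refers to the same object
    return 0;
}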
-
Maximum UDP transmit rate is just 50 MB/s and with 100% CPU usage!??
I took your advice and did some testing in Wireshark. Based on the "Identification" field in the IP header, which seems to be assigned sequentially, everything looks fine down to the bottom of the IP layer, but then Wireshark only registers every fifth packet or so as an Ethernet frame, which correlates well with the fact that a ~60 MB/s data stream is being pushed down a pipe with a capacity of 12.5 MB/s (100 Mbit). As I've learned, this layer is usually implemented in the network card driver, so a poorly written driver or interface specification may be the cause of this "no blocking" problem. As for the transfer rate, I'm still puzzled.
-
Maximum UDP transmit rate is just 50 MB/s and with 100% CPU usage!??
Thank you for your question. The test machine was an Intel P35 chipset-based system: Core 2 Duo E6400 (2133 MHz), 2 GB of RAM, Gigabyte P35-DS3 motherboard.
-
Maximum UDP transmit rate is just 50 MB/s and with 100% CPU usage!??
Yes, I do, but I sent the program to some fearless friends without a firewall, and they experienced similar results on Windows XP. Meanwhile, I have tested the code on other OSes as well. The measurements for 1400-byte UDP datagrams:
- Windows XP SP3 (with firewall): 66 MB/s, 100% CPU usage per core (mostly kernel)
- Windows 7 RC1 (no firewall): 11.5 MB/s, but only 20% CPU usage per core; it seems to take the interface capacity into account (which is, after all, what I would expect in the first place)
- Ubuntu 8.10 on VMware with Tools: 120 MB/s, 100% CPU usage per core (mostly kernel)
In contrast, I've been able to push data through a loopback TCP connection at 330 MB/s on Windows XP/CLR, and nearly 500 MB/s on Ubuntu.
-
Maximum UDP transmit rate is just 50 MB/s and with 100% CPU usage!??
Hi! I was experimenting with custom flow-control techniques for bulk transfers over UDP when I discovered something very weird. Please take a look at the following code: it just sends UDP datagrams of 1400 bytes in an endless loop to some IP address.
#include <winsock2.h>
#include <stdio.h>
#include <conio.h>

int main()
{
    // Winsock must be initialized before any socket call.
    WSADATA wsaData;
    WSAStartup( MAKEWORD( 2, 2 ), &wsaData );

    SOCKET sock = socket( AF_INET, SOCK_DGRAM, IPPROTO_UDP );

    sockaddr_in targetAddr;
    targetAddr.sin_family = AF_INET;
    targetAddr.sin_addr.s_addr = inet_addr( "...some IP..." );
    targetAddr.sin_port = htons( 1337 );

    char arr[1400];            // payload content is irrelevant here
    long long sent = 0;

    while( !kbhit() )
    {
        for( int i = 0; i < 1000; ++i )
        {
            int res;
            if( (res = sendto( sock, arr, 1400, 0, (sockaddr*)&targetAddr, sizeof( targetAddr ) )) == SOCKET_ERROR )
            {
                printf( "Error: %d\n", WSAGetLastError() );
                return -1;
            }
            sent += res;
        }
        // progress report once per 1000 datagrams
        printf( "\r%ld MBs sent", (long)(sent >> 20) );
    }
    return 0;
}
When I run the program, every sendto() call succeeds and reports having sent 1400 bytes of data. The interesting thing is that I get a transfer rate of only about 50 MB/s but 100% CPU usage on one core (mostly kernel mode). Now:
-- my computer is connected to an Ethernet 100BaseTX network, which obviously does not support the transfer rate above, so datagrams get lost before even reaching the network. Why does sendto() then report having sent the data, and moreover, why does it not block when the I/O buffers fill up? (The documentation says that it should.)
-- how on Earth can someone utilize the full potential of, say, a Gigabit Ethernet network if merely sending data at half of its capacity already causes maximum CPU load?
So, what am I doing wrong, and why on Earth does sendto() take so long? Any suggestion is very welcome. Thanks, clayman
P.S.: I ran a test with 140 bytes of data per call, and the transfer rate dropped to roughly 5 MB/s, so the _number_ of sendto() calls seems to be the bottleneck.
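For reference, the kind of self-imposed pacing I'm ultimately aiming for looks roughly like this (only a sketch; the 12 MB/s budget and the 10 ms threshold are made-up numbers, not measurements, and error handling is omitted):
#include <winsock2.h>
#include <windows.h>

// Send at most targetBytesPerSec, sleeping whenever we run ahead of the budget.
void pacedSend( SOCKET sock, const sockaddr_in& target )
{
    const double targetBytesPerSec = 12.0 * 1024 * 1024;   // assumed link budget
    char payload[1400] = {0};
    long long sentBytes = 0;
    DWORD start = GetTickCount();

    for( ;; )
    {
        sendto( sock, payload, sizeof( payload ), 0, (const sockaddr*)&target, sizeof( target ) );
        sentBytes += sizeof( payload );

        // How far ahead of the budget are we?
        double elapsedSec = ( GetTickCount() - start ) / 1000.0;
        double aheadSec   = sentBytes / targetBytesPerSec - elapsedSec;
        if( aheadSec > 0.010 )                    // more than ~10 ms ahead
            Sleep( (DWORD)( aheadSec * 1000 ) );  // give the CPU back
    }
}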
-
The breakpoint will not currently be hit - Invalid file line! ???
MS VC++ .NET 2003 tells me this out of nowhere for (some) breakpoints in a particular build, even though everything worked correctly in the previous build. Someone recommended deleting the build folder: done. Deleting the project's .suo file: done. A debug, non-incremental re-build: done. Turning all optimization options off: done. So what the hell causes this stupid error, or whatever it is?
-
is there a platform-independent FillMemory function in C/C++!?
And, of course, is there any way to put not bytes but ints (4 bytes) after each other? Because memset seems suitable only for single bytes...
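To make the intent concrete, this is the effect I'm after, written here with std::fill_n (just a sketch; I don't know whether this is considered the idiomatic choice):
#include <algorithm>

int main()
{
    int buffer[1024];
    const int pattern = 0x01020304;          // the 4-byte value to replicate
    std::fill_n( buffer, 1024, pattern );    // fills element-wise, not byte-wise
    return 0;
}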
-
is there a platform-independent FillMemory function in C/C++!?
Is it elegant to use 'memset' in a full C++ environment?
-
is there a platform-independent FillMemory function in C/C++!?
Hi! I'm looking for a platform-independent C/C++ function that pre-fills a large memory area with some value, just like FillMemory does in Win32. Does such a function exist, or do I have to write my own in assembly?
-
adding your own class's overridables to the overridables page in VS.NET
Hi! Is there any way to extend the class wizard's knowledge of overrides for a class? I mean the virtual member functions on the Properties -> Overrides page (in Class View). For example, OnAccept, OnReceive and OnConnect are already listed on that overrides page for CAsyncSocket. If I define a new virtual function, like OnDataArrive(), how can I make it appear on that page as well? Thanks for your help in advance, clayman
-
how can I add my own funcs to the overrides page in .NET
Hi! I'd like to make my overridable functions appear on the Properties -> Overrides page of the VS environment, thereby offering users of my classes a more convenient way to override them. What should I do to achieve this? Thanks in advance.
-
UDP listen sockets don't get any data behind a firewall (ZoneAlarm)
Hi! I've been trying to figure this bug out for a week without any progress, so I'd like to ask for your help; maybe there's a well-known workaround. :) I have a UDP listen socket, which is supposed to accept incoming data from the Internet. The problem arises when I try to do this behind ZoneAlarm (the only firewall I've tried so far). I allow every action (act as a server even for the Internet), but when I switch the firewall on, my socket does not receive a single byte of data, although it works properly without the firewall. I've also given it a try with the original MFC CAsyncSocket UDP sample, and voilà, it worked, even behind the firewall. So I started to compare the two codes, but it turned out that they were exactly the same (the networking part, of course). I've also checked the ZA settings and they are exactly the same too. I create the socket with almost all default values, just:
CAsyncSocketDerivedClass sock;
sock.Create( 0, SOCK_DGRAM );
With this code it gets data without ZA, but it doesn't get a single byte behind ZA. Please, heeelp! Thanks in advance!
-
Limiting desktop (maximized windows') size
Hi! The idea comes from ICQ: if you dock it to the side of the screen, it limits the size of the desktop the same way as, for example, the tray does. So I'd like to know how to restrict the desktop, and therefore the size of maximized windows. Is there any way to do this, a function, or something? Thanks in advance.
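The closest thing I've come across so far is shrinking the work area with SystemParametersInfo and SPI_SETWORKAREA (sketch below); I have no idea whether ICQ actually does it this way, it may well use the AppBar API instead:
#include <windows.h>

// Reserve 100 pixels on the right edge of the primary monitor so that
// maximized windows no longer cover that strip.
void shrinkWorkArea()
{
    RECT work;
    SystemParametersInfo( SPI_GETWORKAREA, 0, &work, 0 );   // current work area
    work.right -= 100;                                      // carve out a strip
    SystemParametersInfo( SPI_SETWORKAREA, 0, &work, SPIF_SENDCHANGE );
}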
-
Sorting 'std::list's filled with class pointers leads to a compile error
Yes, of course. The solution is quite simple, almost trivial, but only if you think "with a Microsofter's head". The secret is that instead of deriving your own sorter class from std::greater<...>, you specialize std::greater itself for your pointer type, so the separate CYourSorterClass never actually exists. To be clear: just extend namespace std with the following lines:
namespace std
{
    template<>
    struct greater<CYourClass*> : public binary_function<CYourClass*, CYourClass*, bool>
    {
        bool operator()( const CYourClass* x, const CYourClass* y ) const
        {
            // if you don't have the class's operator< overloaded:
            return x->sortbyvalue < y->sortbyvalue;
            // if you do have it, use this instead:
            // return *x < *y;
        }
    };
}
Then sort your list just the same way as if it held 'int's etc.:
nodes.sort( std::greater<CYourClass*>() );
That's it. I hope I could help.
-
changing CPropertySheet's background color
Hi, is there a simple way to set a property sheet's background color? And how can you set it for a CTabCtrl? Thanks.
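For the pages themselves, the only approach I can think of so far is overriding OnCtlColor in the CPropertyPage-derived class and returning my own brush; a rough, untested sketch (CMyPage and m_bkBrush are hypothetical names, and an ON_WM_CTLCOLOR() entry in the message map is assumed):
// CMyPage is a hypothetical CPropertyPage-derived class with a CBrush member
// m_bkBrush, created beforehand e.g. with m_bkBrush.CreateSolidBrush( RGB(240, 240, 255) ).
HBRUSH CMyPage::OnCtlColor( CDC* pDC, CWnd* pWnd, UINT nCtlColor )
{
    if( nCtlColor == CTLCOLOR_DLG || nCtlColor == CTLCOLOR_STATIC )
    {
        pDC->SetBkMode( TRANSPARENT );   // let static text show the new background
        return (HBRUSH)m_bkBrush;        // paint the page and its statics with our brush
    }
    return CPropertyPage::OnCtlColor( pDC, pWnd, nCtlColor );
}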