GC-Like Slowdown
-
Does the framework perform any non-garbage-collector operations that slow down applications at time scales similar to those of garbage collections? I ask because in my C# code, a certain operation has a measured average-case time of a few microseconds, but once in a while one of these operations takes 5000 times longer than the average. This does not appear to be attributable to varying latency in the memory hierarchy or to variations in the code path followed during the operation. I have also verified that no garbage collections occur during these long operations, by checking that GC.CollectionCount did not change across the operation. My timing measurements measure real time, so a factor-of-5000 slowdown could easily be due to the operating system, but I thought I would ask here in case other framework operations could be responsible. (I am working on a hash table implementation for a real-time system; I am interested in the worst-case times for the add, remove, and retrieve operations.) By the way, is there a convenient way to time a block of code without real-time measurements such as Win32 API calls to QueryPerformanceCounter and the like? It would be especially nice if the data were accessible from within the program, so that the program could report the data and compute auxiliary statistics, instead of relying on the extra steps that are usually required when using a profiler.
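The measurement loop described above can be sketched in managed code with System.Diagnostics.Stopwatch, which wraps QueryPerformanceCounter. It is still a real-time (wall-clock) measurement, but it keeps everything inside the program and makes the statistics directly available. The helper name and the policy of discarding GC-perturbed samples are illustrative, not anything from the framework:

```csharp
using System;
using System.Diagnostics;

static class MicroTimer
{
    // Times 'action' repeatedly and returns (average, max) in microseconds.
    // Samples during which a gen-0 collection occurred are discarded, so any
    // remaining outliers are known not to be garbage collections.
    public static (double avg, double max) Time(Action action, int iterations)
    {
        double ticksPerUs = Stopwatch.Frequency / 1e6;
        long total = 0, max = 0;
        int kept = 0;
        var sw = new Stopwatch();

        for (int i = 0; i < iterations; i++)
        {
            int gcBefore = GC.CollectionCount(0);
            sw.Restart();
            action();
            sw.Stop();
            if (GC.CollectionCount(0) != gcBefore) continue; // a GC ran: drop the sample
            total += sw.ElapsedTicks;
            if (sw.ElapsedTicks > max) max = sw.ElapsedTicks;
            kept++;
        }
        if (kept == 0) return (double.NaN, double.NaN);
        return (total / (double)kept / ticksPerUs, max / ticksPerUs);
    }
}
```

Since Stopwatch measures wall-clock time, OS preemption still shows up in the maximum; the benefit is only that the numbers live inside the program and auxiliary statistics can be computed there.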
-
Exceptions can do that. I suggest you attach a profiler and see what the code does in that bad case.
**
xacc.ide-0.2.0.57 - now with C# 2.0 parser and seamless VS2005 solution support!
**
-
The slow operations are rare - at most 1 in 10,000 operations, probably less frequent than that. How would I track this down with a profiler? Given the nature of the code, it does not seem likely that exceptions are being thrown in its midst. There are some calls to the base class library, but these use things like List and do not seem likely to involve exceptions.
-
A s h wrote:
I have verified that no garbage collections occur during these long operations
In that case, it could be linked to Windows memory management. I do not know whether you allocate large blocks of memory or a lot of small ones. When you ask for memory, Windows initially avoids committing it. The first time you actually touch the memory, that causes a page fault, and Windows will either supply a fresh zeroed page or bring one in from disk, depending on the situation. If you want your system to meet tight timing constraints, you might have to (try to) manage the memory yourself.
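The first-touch behaviour described above can be moved out of the timed path by touching every page of a buffer once during initialization, so the commits (and any demand-zero page faults) happen before measurement begins. A minimal sketch; the helper name and the 4 KB page-size constant are illustrative assumptions:

```csharp
using System;

static class PageWarmup
{
    const int PageSize = 4096; // typical x86 page size (an assumption)

    // Touch every page of 'buffer' once so Windows commits the pages now,
    // rather than taking demand-zero page faults later on the hot path.
    public static void Commit(byte[] buffer)
    {
        for (int i = 0; i < buffer.Length; i += PageSize)
            buffer[i] = 0;
        if (buffer.Length > 0)
            buffer[buffer.Length - 1] = 0; // touch the final partial page too
    }
}
```

In managed code the GC may still relocate the array later, so this only amortizes the initial commit cost; it is not a hard real-time guarantee.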
-
The class does small to medium-size allocations, generally 32-1000 bytes at a time. The implementation does a bit of custom memory management in the sense that it maintains a pool of objects for one commonly used object type. The class runs completely in managed code, and I am not willing to change that - this is a general-purpose, library-style implementation of a data structure (with essentially the same interface as System.Collections.Generic.Dictionary) that is intended for use within managed or unmanaged programs. The test framework does use the Win32 API to do timing measurements; that is the only part of the test framework that runs in unmanaged code. The amount of time consumed during these slowdowns is on the order of 0.02 seconds - sorry for not mentioning this explicitly in the original post. That seems orders of magnitude too large to be accounted for by page faults. What do you think?
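A pool of the kind mentioned above can be as simple as a stack of reusable instances; once the pool is warm, the hot path performs no allocations at all. This is only an illustrative sketch, not the poster's actual implementation:

```csharp
using System;
using System.Collections.Generic;

// Minimal object pool: recycles instances instead of allocating on the hot path.
sealed class Pool<T> where T : new()
{
    private readonly Stack<T> free = new Stack<T>();

    // Hands out a recycled instance when one is available, otherwise allocates.
    public T Rent() => free.Count > 0 ? free.Pop() : new T();

    // Returns an instance to the pool for reuse.
    public void Return(T item) => free.Push(item);
}
```

Note that pooling keeps objects reachable, which tends to promote them to older GC generations - the "mid-life crisis" pattern mentioned in the next reply.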
-
You should take a look at http://blogs.msdn.com/ricom/ . You could also search for "GC mid-life crisis"; that will give you pointers to more detail on how the garbage collector works and how it interacts with the OS.
-
Thanks. I think I know what the slowdown is due to - it is probably JITting. Now, if only I knew how to force the runtime to JIT everything up front, without having to do it in an ad-hoc way like running the benchmark twice... -- modified at 17:34 Tuesday 5th December, 2006
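One way to force JIT compilation up front, rather than warming up by running the benchmark twice, is RuntimeHelpers.PrepareMethod, available since .NET 2.0. A sketch that walks a type's declared methods; the helper name and the Demo class are illustrative, and generic methods would need type arguments supplied separately:

```csharp
using System;
using System.Reflection;
using System.Runtime.CompilerServices;

static class Prejit
{
    // Force the JIT to compile every non-abstract, non-generic method of a type
    // up front, so first-call JIT pauses do not show up in later measurements.
    public static void PrepareType(Type type)
    {
        const BindingFlags all = BindingFlags.Public | BindingFlags.NonPublic |
                                 BindingFlags.Instance | BindingFlags.Static |
                                 BindingFlags.DeclaredOnly;
        foreach (MethodInfo m in type.GetMethods(all))
        {
            if (m.IsAbstract || m.ContainsGenericParameters) continue;
            RuntimeHelpers.PrepareMethod(m.MethodHandle);
        }
    }
}

sealed class Demo // illustrative type to warm up
{
    public int Add(int x, int y) { return x + y; }
}
```

Alternatively, ngen.exe can precompile a whole assembly to native code at install time, which removes JIT pauses without any code changes.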