Is there a process handle limit in 64-bit Windows Web Server 2008?
-
First of all, I know the standard answer to this question is "If you need to know, you're doing it wrong", but I really do need to know!

Long story short, I have a caching mechanism in a web service (IIS 6) that can cache thousands of items in memory and return them almost instantly, without requiring a trip to the main datastore. However, when a cached item expires it needs to be refetched from the datastore, and only one caller can execute the 'fetch' code at a time (otherwise it runs into all sorts of horrible race conditions and deadlocks). All the other callers must either wait for the first thread to update the cache (if the item is not found in the cache at all) or simply return the cached item while it is being updated in the background.

Each data item contains its own ManualResetEvent, which it uses to block while the cached item is updated. By giving each data item its own ManualResetEvent, I can allow other data items to be fetched without blocking them - the only calls that are blocked are those for the *exact* item that has expired.

My concern is that I will run out of handles, since each item has its own ManualResetEvent and there could potentially be tens of thousands of data items. Do I need to worry about this, or should I try to come up with an alternate implementation? Perhaps I could use a pool of ManualResetEvents that can be assigned to items as needed, similar to the way database connection pooling works? That is probably a "better" implementation, but it is more complex and harder to debug...

Thanks for your help :)
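To make the current setup concrete, each entry looks roughly like this (a simplified sketch, not my production code - the class and member names are just illustrative, and expiry and error handling are stripped down to show the idea):

using System;
using System.Threading;

// Sketch only: names are illustrative; expiry/error handling trimmed down.
class CacheEntry<T>
{
    private readonly object _sync = new object();
    private readonly ManualResetEvent _updateDone = new ManualResetEvent(true); // signaled = no refresh running
    private bool _updating;
    private bool _hasValue;
    private T _value;
    private DateTime _expiresUtc;

    public T Get(Func<T> fetchFromDatastore)
    {
        bool doTheFetch = false;

        lock (_sync)
        {
            if (_hasValue && DateTime.UtcNow < _expiresUtc)
                return _value;                    // fresh hit - nobody blocks

            if (!_updating)                       // first caller past expiry does the fetch
            {
                _updating = true;
                _updateDone.Reset();
                doTheFetch = true;
            }
            else if (_hasValue)
                return _value;                    // stale but usable while the refresh runs
        }

        if (doTheFetch)
        {
            try
            {
                T fresh = fetchFromDatastore();
                lock (_sync)
                {
                    _value = fresh;
                    _hasValue = true;
                    _expiresUtc = DateTime.UtcNow.AddMinutes(5); // placeholder expiry
                }
            }
            finally
            {
                lock (_sync) { _updating = false; }
                _updateDone.Set();                // release waiters for this item only
            }
            lock (_sync) return _value;
        }

        _updateDone.WaitOne();                    // nothing cached yet: wait for the first caller
        lock (_sync) return _value;
    }
}

The point is that WaitOne() only ever blocks callers asking for this particular item, because every entry owns its own event - which is exactly why the handle count worries me.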
The StartPage Randomizer - The Windows Cheerleader - Twitter
modified on Tuesday, April 14, 2009 4:14 PM
-
Miszou wrote:
If you need to know, you're doing it wrong
I'm afraid you proved it true once more. I don't know the number (I must be doing something right :) ); AFAIK it is a fixed number that depends on the Windows version, and I expect it does not exceed a few thousand on a client version.

Since each caller can be waiting on, and causing, at most one update at a time, why not associate a ResetEvent with each caller (i.e. each thread) instead of with each cached item? You can use a Dictionary if you want an automatic association, and create the ResetEvent when an entry is not found; you could optionally provide a CreateEvent() method so the ResetEvent and DictionaryEntry can be created in advance. IMO this scheme looks simpler and more deterministic than a pooling scheme. :)
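Something along these lines (just a sketch of the idea; the names here are made up for illustration):

using System.Collections.Generic;
using System.Threading;

// Sketch of the per-caller event idea; names invented for illustration.
static class CallerEvents
{
    private static readonly object _sync = new object();
    private static readonly Dictionary<int, ManualResetEvent> _events =
        new Dictionary<int, ManualResetEvent>();

    // Returns the ResetEvent associated with the calling thread, creating it on
    // first use (calling this up front plays the role of the optional CreateEvent()).
    // The number of events is bounded by the number of worker threads,
    // not by the number of cached items.
    public static ManualResetEvent ForCurrentThread()
    {
        int id = Thread.CurrentThread.ManagedThreadId;
        lock (_sync)
        {
            ManualResetEvent ev;
            if (!_events.TryGetValue(id, out ev))
            {
                ev = new ManualResetEvent(false);
                _events.Add(id, ev);
            }
            return ev;
        }
    }
}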
-
Luc 648011 wrote:
I'm afraid you proved it true once more.
Haha, yeah I was afraid of that! :laugh: I like your idea of associating the ResetEvent with the caller - as you say, it's a lot simpler than a pooling scheme and much safer than risking the handle limit. I think I've been looking at the forest for so long that I couldn't see the trees! Thanks for the help. :)
The StartPage Randomizer - The Windows Cheerleader - Twitter