Threads in WCF.
-
Hi All, I have a WCF service which will be called by many clients at the same time, so I set the concurrency mode to Multiple, and it is working fine. In my database I have set a time limit for processing each request. If a particular request does not finish within that time limit, I need to kill (abort) the thread handling it and hand that request to a new thread. Can anyone give me an idea of how I can do this? Thanks, Lijo.
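For reference, my service is set up roughly like this (the contract and class names here are simplified placeholders, not my real code):

    using System.ServiceModel;

    [ServiceContract]
    public interface IRequestService
    {
        [OperationContract]
        string ProcessRequest(int requestId);
    }

    // ConcurrencyMode.Multiple lets WCF dispatch many calls on the same
    // service instance at the same time, each call on its own worker thread.
    [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                     ConcurrencyMode = ConcurrencyMode.Multiple)]
    public class RequestService : IRequestService
    {
        public string ProcessRequest(int requestId)
        {
            // The database work for each request goes here.
            return "done";
        }
    }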
-
I'm not sure you'd want to kill something because it's taking a long time, only to start it up again (from the beginning) on another thread. WCF should be able to handle concurrent requests for you without you having to do anything beyond some configuration (such as setting maxConnections in the binding at the server).
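As a rough sketch of the kind of configuration I mean, this is how maxConnections could be set programmatically on a TCP binding when self-hosting (the service type, contract, and address are just examples, not anything from your code):

    using System;
    using System.ServiceModel;

    class Program
    {
        static void Main()
        {
            // Same setting as maxConnections="100" on a <netTcpBinding> binding in config.
            var binding = new NetTcpBinding();
            binding.MaxConnections = 100;

            var host = new ServiceHost(typeof(RequestService),
                                       new Uri("net.tcp://localhost:9000/RequestService"));
            host.AddServiceEndpoint(typeof(IRequestService), binding, "");
            host.Open();

            Console.WriteLine("Service running. Press Enter to stop.");
            Console.ReadLine();
            host.Close();
        }
    }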
-
Thanks a lot for your reply. OK, that's fine, I can start it up again. But what will happen to the original thread? I set the concurrency mode to Multiple and the maximum concurrent calls to 100, so the service will handle 100 requests at a time, right? If, for example, the 25th request takes more time than the limit I set in my database, I need to kill the thread that is handling that 25th request. Is there any way to do this? I think you understand my issue. If, as you suggest, I start that 25th request again, what will happen to the original thread that handled it? Could you please clarify this? Thanks, Lijo.
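Just to be clear, by "maximum concurrent calls is 100" I mean the service throttle, which I set up roughly like this when the host is created (again, the names and address are simplified placeholders):

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Description;

    class ThrottledHost
    {
        static void Main()
        {
            var host = new ServiceHost(typeof(RequestService),
                                       new Uri("net.tcp://localhost:9000/RequestService"));
            host.AddServiceEndpoint(typeof(IRequestService), new NetTcpBinding(), "");

            // Cap the number of calls WCF dispatches at once at 100; further
            // requests wait until one of those 100 call slots frees up.
            var throttle = host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
            if (throttle == null)
            {
                throttle = new ServiceThrottlingBehavior();
                host.Description.Behaviors.Add(throttle);
            }
            throttle.MaxConcurrentCalls = 100;

            host.Open();
            Console.WriteLine("Listening with MaxConcurrentCalls = 100. Press Enter to stop.");
            Console.ReadLine();
            host.Close();
        }
    }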
-
As I see it, there are two ways to handle this. One: set a timeout on the command; when a request exceeds that timeout it will throw an exception, and you can handle that exception in your service code and maybe retry it there. Two: set the timeout on the command, let the exception propagate, and catch it at the client, which can then decide whether to retry the request. In the first case you're still using the same thread the whole time; in the second case WCF puts the thread back into its pool. But what it really comes down to is that you don't need to worry about what happens to the thread; WCF will deal with it either way.

I really don't think you should be restarting this process at all. If a database request takes on average 1 second and your command timeout is 1 minute, then hitting that timeout means you have a problem and should throw an exception, not just hide it and try again. If one particular database call averages 50 seconds, you should simply raise your command timeout. Say a process averages 50 seconds, and due to server load it is going to take 1 minute 10 seconds, but it times out at 1 minute: if you restart it, you use 1 minute 50 seconds of server time when you could have used 1 minute 10 seconds. Not to mention that if server load is what slowed your database call down, the second call is now adding to that load.

I've seen someone put code in to call a stored procedure a second or third time if it errored due to deadlock or timeout (usually caused by blocking locks), and it generally just made things worse. Whenever that retry kicked in, the problem would cascade across all of the users. We'd tell everyone to stop what they were doing for 2 minutes and everything would go back to normal.
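Something like this is what I mean by the first option, handling the timeout inside the service operation (the connection string, stored procedure name, and timeout value are placeholders; error number -2 is the SqlClient code for a command timeout):

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.ServiceModel;

    public class RequestService : IRequestService
    {
        public string ProcessRequest(int requestId)
        {
            using (var connection = new SqlConnection("Server=.;Database=Requests;Integrated Security=true"))
            using (var command = new SqlCommand("dbo.ProcessRequest", connection))
            {
                command.CommandType = CommandType.StoredProcedure;
                command.Parameters.AddWithValue("@RequestId", requestId);
                command.CommandTimeout = 60;   // seconds; match the limit you keep in the database

                connection.Open();
                try
                {
                    return (string)command.ExecuteScalar();
                }
                catch (SqlException ex)
                {
                    if (ex.Number == -2)   // -2 means the command timed out
                    {
                        // Surface the timeout as a fault so the caller decides
                        // whether to retry, instead of silently retrying here.
                        throw new FaultException("The request timed out on the server.");
                    }
                    throw;
                }
            }
        }
    }

If you went the second route instead, you would catch that FaultException at the client and decide there whether a retry is worth it.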