As I see it, there are two ways to handle this:

1. Set a timeout on the command. When a request exceeds that timeout, the command throws an exception, which you can catch in your service code and maybe retry there.
2. Set the same timeout on the command, but let the exception propagate out of the service; catch it at the client and maybe retry the request from there. (There's a sketch of the command-timeout setup below.)

In the first case you're still using the same thread the whole time. In the second case WCF will put the thread back into its pool. But what it really comes down to is that you don't need to worry about what happens to the thread; WCF will deal with it either way.

I really don't think you should be restarting this process at all. If a database request takes 1 second on average and your command timeout is 1 minute, then hitting that timeout means you have a real problem and should throw an exception, not hide it and try again. If you have one particular database call that averages 50 seconds, just raise the command timeout for that call.

Say a call averages 50 seconds, and due to server load one run is going to take 1 minute 10 seconds, but it times out at 1 minute. If you restart it, you use 1 minute 50 seconds of server time (the wasted minute plus roughly 50 seconds for the retry) when you could have used 1 minute 10 seconds. Not to mention that if server load is what slowed the call down in the first place, the second call is now adding to that load.

I've seen someone add code to call a stored procedure a second or third time when it failed due to a deadlock or timeout (usually caused by blocking locks), and it generally just made things worse. Whenever that retry kicked in, the problem would cascade across all of the users. We'd tell everyone to stop what they were doing for two minutes and everything would go back to normal.
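Here's a minimal sketch of the command-timeout approach, assuming ADO.NET with System.Data.SqlClient; the connection string and stored procedure name are hypothetical. SqlClient reports a command timeout as a SqlException with Number == -2, so you can filter on that and decide whether to handle it in the service or let it surface to the client:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

public static class OrderRepository // hypothetical service-side data access class
{
    public static DataTable GetOrders(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("dbo.GetOrders", connection)) // hypothetical proc
        {
            command.CommandType = CommandType.StoredProcedure;
            command.CommandTimeout = 60; // seconds; the default is 30

            connection.Open();
            try
            {
                var table = new DataTable();
                new SqlDataAdapter(command).Fill(table);
                return table;
            }
            catch (SqlException ex) when (ex.Number == -2) // -2 = command timeout
            {
                // Option 1: handle it here in the service (log it, maybe retry once).
                // Option 2: rethrow so WCF returns a fault and the client decides.
                throw;
            }
        }
    }
}
```

Either way, the exception, not a silent retry, is what tells you the timeout you chose no longer matches what the query actually needs.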