If I were you I'd consider writing/using a thread pool. Have the first thread start with a job of downloading the first page, then perform your default processing on it, which is to parse it for links. Then fill the thread pool's job list with one job per URL found, and so on. Add a depth count to each job so that you can limit how far the crawl goes. Before a thread fills the job list with more URLs to investigate, have it post some data into another list that the doc/view is in charge of, so the user can get a sense of what's happening.

P.S. - A thread pool is a simple structure of X threads that wait for 'jobs' to handle. They share a single list of pending jobs, and whenever a job exists in the list an event is set; the first thread to catch it removes the job from the list and handles it. It requires some synchronization (locking the list, waiting for input, waiting for all the threads to destroy themselves, etc.), but if you write it in a generalized way it's worth the effort.
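To make that concrete, here's a rough sketch using standard C++ threads. The names (Job, ThreadPool, maxDepth) are just for illustration, and the "download and parse" step is stubbed out with a counter and one fake discovered link, but it shows the shared pending-job list, the wait-for-work event, and the depth limit described above:

```cpp
#include <atomic>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

// A crawl job: a URL plus its depth in the link graph (illustrative names).
struct Job {
    std::string url;
    int depth = 0;
};

class ThreadPool {
public:
    ThreadPool(unsigned threadCount, int maxDepth) : maxDepth_(maxDepth) {
        for (unsigned i = 0; i < threadCount; ++i)
            workers_.emplace_back([this] { workerLoop(); });
    }

    ~ThreadPool() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            stopping_ = true;
        }
        condition_.notify_all();               // wake everyone so they drain and exit
        for (std::thread& t : workers_) t.join();
    }

    // Add a job to the shared pending list and wake one waiting thread.
    void post(Job job) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            jobs_.push(std::move(job));
        }
        condition_.notify_one();
    }

    int processed() const { return processed_.load(); }

private:
    void workerLoop() {
        for (;;) {
            Job job;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                condition_.wait(lock, [this] { return stopping_ || !jobs_.empty(); });
                if (jobs_.empty()) return;     // stopping_ set and nothing left to do
                job = std::move(jobs_.front());
                jobs_.pop();
            }
            // A real crawler would download job.url and parse it for links here;
            // this sketch just counts the job and fakes one discovered link.
            processed_.fetch_add(1);
            if (job.depth + 1 < maxDepth_)     // the per-job depth limit
                post(Job{job.url + "/link", job.depth + 1});
        }
    }

    std::vector<std::thread> workers_;
    std::queue<Job> jobs_;
    std::mutex mutex_;
    std::condition_variable condition_;
    std::atomic<int> processed_{0};
    int maxDepth_;
    bool stopping_ = false;
};
```

Seed it with one job at depth 0 and a max depth of 3 and it will process exactly three jobs (depths 0, 1, 2) before the limit kicks in. The destructor shows the "waiting for all threads to destroy themselves" part: it flags the pool as stopping, lets the workers drain whatever is left, then joins them.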