I recently had a similar problem and gained some speed with a two-step change: parallelize the
foreach (string f in files)
and then ditch the file streaming in favor of File.ReadAllLines(...). Basically the disk was my bottleneck, and it performed better doing bulk reads of data instead of streaming line by line; after each bulk read I had a chunk of data in memory that I could process, bound only by the limitations of my CPU. This does, of course, assume that you have some memory to spare, because you're reading entire files in at a time.

There was a limit, though, to how many concurrent file reads I could throw at my disk; past a certain point the performance dropped significantly, so I had to throttle back how many parallel file reads my app could do. I did some trial and error to find the best number, YMMV.

That would also require you to redo your database access, since you probably don't want to be doing that from multiple threads, but that goes along with some of the other suggestions you've received. Collecting that data in memory and doing a bulk insert later on, instead of individual inserts, would make a big difference too.
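Here's a rough sketch of the shape I mean, not your actual code: the directory path, the throttle value of 4, and the "keep every line" parsing are all placeholder assumptions, and the bulk insert at the end is only hinted at in a comment since I don't know your database setup.

    using System;
    using System.Collections.Concurrent;
    using System.IO;
    using System.Threading.Tasks;

    class BulkFileProcessor
    {
        static void Main()
        {
            // Placeholder for your `files` collection.
            string[] files = Directory.GetFiles(@"C:\data", "*.txt");

            // Throttle concurrent reads; find the sweet spot for your disk by trial and error.
            var options = new ParallelOptions { MaxDegreeOfParallelism = 4 };

            // Thread-safe collection so workers can hand results off without explicit locking.
            var parsed = new ConcurrentBag<string>();

            Parallel.ForEach(files, options, f =>
            {
                // Bulk read: pull the whole file into memory instead of streaming line by line.
                string[] lines = File.ReadAllLines(f);

                foreach (string line in lines)
                {
                    // Replace with your real per-line processing; here we just keep the line.
                    parsed.Add(line);
                }
            });

            // Do the database work once, from a single thread, after the parallel part is done,
            // e.g. SqlBulkCopy or a batched INSERT built from `parsed`, rather than per-line inserts.
            Console.WriteLine($"Parsed {parsed.Count} lines from {files.Length} files.");
        }
    }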