Help with DataTable update
-
Hello, I am trying to speed up the update of a DataTable. The short story is that I have a DataTable 'A' that has approximately 10-15 columns of information, depending on the data loaded. I have a couple of extra columns that get added to the table at runtime, and I need to populate those columns based on the values in another table, 'B', that has 3 columns: an ItemId column for matching the ItemId column in table A, and two other columns. I could have as many as 10K rows in DataTable A, and the number of rows in DataTable B could vary. It is possible, too, that DataTable B may have rows where the ItemId doesn't match any ItemId in DataTable A, and in those instances, I want to ignore the rows. My current solution involves something like this:
DataColumn aColNew1 = tableA.Columns["New1"];
DataColumn aColNew2 = tableA.Columns["New2"];
DataColumn bColNew1 = tableB.Columns["New1"];
DataColumn bColNew2 = tableB.Columns["New2"];
DataColumn bColItemId = tableB.Columns["ItemId"];

DataRow[] bRows = tableB.Select();
for (int i = 0; i < bRows.Length; ++i)
{
    // For each row in B, look up the matching row in A by ItemId.
    DataRow[] aRows = tableA.Select("ItemId = '" + bRows[i][bColItemId].ToString() + "'");
    if (aRows.Length > 0)
    {
        aRows[0][aColNew1] = bRows[i][bColNew1];
        aRows[0][aColNew2] = bRows[i][bColNew2];
    }
}
Now, I'm pretty new to C#, so this might not be the most efficient way to do this. I hope it isn't, because it takes a considerable amount of time (close to 40 seconds) to execute when tables A and B have around 10K rows. Any suggestions on how I can speed this up would be much appreciated. Thanks, Matt
-
The Select() calls were killing me. It was actually faster to just select everything, sorted by ItemId, and iterate through the two arrays using a while-loop nested within a for-loop, bailing out of the while whenever I got a match or when the ItemId I was comparing was greater than the current one in the while loop. Ended up cutting the processing time down to maybe a second or so. Should have realized how inefficient the original algorithm was. :(
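For anyone who finds this later, here's roughly what that merge-style pass looks like. This isn't my exact code; it's a sketch that assumes ItemId is a string column and that the comparison inside the loop agrees with the sort order Select() produces (culture-aware, honouring the table's CaseSensitive setting):

// Select every row of both tables once, sorted by ItemId.
DataRow[] aRows = tableA.Select("", "ItemId");
DataRow[] bRows = tableB.Select("", "ItemId");

int b = 0;
for (int a = 0; a < aRows.Length; ++a)
{
    string aId = aRows[a]["ItemId"].ToString();

    while (b < bRows.Length)
    {
        string bId = bRows[b]["ItemId"].ToString();
        // Assumption: this comparison matches the ordering Select() used above.
        int cmp = string.Compare(bId, aId, !tableA.CaseSensitive, tableA.Locale);

        if (cmp < 0)
        {
            ++b;            // this B row precedes every remaining A row, so skip it
        }
        else
        {
            if (cmp == 0)   // matching ItemId: copy the two new columns across
            {
                aRows[a]["New1"] = bRows[b]["New1"];
                aRows[a]["New2"] = bRows[b]["New2"];
            }
            break;          // bail out: either we matched, or B's ItemId is already greater
        }
    }
}

Because both arrays are walked once instead of calling Select() per row, this is roughly linear in the number of rows, which is why it runs in about a second instead of 40.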