Dynamic Arrays
-
I haven't done .NET programming in a while so I'm not sure what the best approach is for this. I'm using the .NET Framework 4.0 and the BigInteger class. Before the end of my loop, I have:
Array.Resize(ref numbers, numbers_temp.Length);
numbers_temp.CopyTo(numbers, 0);
Is there a better approach than this? I have an array of BigInteger that I add and delete numbers from at various times in the loop. Towards the end of my loop, this array grows quite a bit. It seems such a waste to create another array to copy all members from one to the other and then destroy the older one at every loop iteration. Thanks!
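For context, this is roughly the pattern being described, reconstructed from the snippet above as a sketch; how numbers_temp is actually filled each pass is an assumption, not code from the question:

using System;
using System.Numerics;

class ResizePattern
{
    static void Main()
    {
        BigInteger[] numbers = new BigInteger[0];

        for (int iteration = 0; iteration < 10; iteration++)
        {
            // Assumed: each pass builds a larger temporary array...
            BigInteger[] numbers_temp = new BigInteger[(numbers.Length + 1) * 2];
            numbers.CopyTo(numbers_temp, 0);
            // ...items are added to / removed from numbers_temp here...

            // The step the question asks about: grow numbers and copy everything back.
            Array.Resize(ref numbers, numbers_temp.Length);
            numbers_temp.CopyTo(numbers, 0);
        }

        Console.WriteLine(numbers.Length);
    }
}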
-
I use a generic list for this. In the rare case when I need to return a real array, I use the ToArray method. Remember from CS: if you resize a dynamic array, a contiguous chunk of unassigned memory has to be found and then the entire array copied over. Thus, a generic list will be more efficient in all cases.
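Something along these lines, as a minimal sketch (the values added are just placeholders):

using System.Collections.Generic;
using System.Numerics;

class ListSketch
{
    static void Main()
    {
        var numbers = new List<BigInteger>();

        // Add and remove as the loop runs; the list manages its own backing array.
        numbers.Add(new BigInteger(42));
        numbers.Add(BigInteger.Pow(2, 100));
        numbers.RemoveAt(0);

        // Only convert to a plain array when some API actually needs one.
        BigInteger[] asArray = numbers.ToArray();
    }
}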
Need custom software developed? I do custom programming based primarily on MS tools with an emphasis on C# development and consulting. I also do Android Programming as I find it a refreshing break from the MS. "And they, since they Were not the one dead, turned to their affairs" -- Robert Frost
-
As in?
List<BigInteger> numbers = new List<BigInteger>();
Thanks!
-
You could use a List<BigInteger>. Or do what the List does: double the size every time you need more room. Or, since it seems that numbers_temp is an array, you are already done. Just change numbers to point to numbers_temp and allocate a new numbers_temp array next time. But you didn't give much information, so maybe I could not properly answer your question.
-
Well, a good way for dynamic arrays is to use the System.Collections.Generic.List; or for keyed stuff a System.Collections.Generic.Dictionary.
List<int> list = new List<int>();
list.Add(1);
list.Add(3);
list.Add(5);
int[] intArray = list.ToArray();
Thanks, I got that.
-
Thanks! I'll probably end up using the scaling. My list array increases by roughly a factor of three at each iteration.
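For reference, the scaling idea looks roughly like this as a sketch; the class, the starting size of 16, and the doubling factor are assumptions, not code from the thread:

using System;
using System.Numerics;

class GrowBuffer
{
    private BigInteger[] items = new BigInteger[16];  // assumed starting size
    private int count;

    public void Add(BigInteger value)
    {
        // Double the backing array only when it is actually full, so the
        // copy cost is amortized instead of paid on every iteration.
        if (count == items.Length)
            Array.Resize(ref items, items.Length * 2);

        items[count++] = value;
    }

    public int Count { get { return count; } }
    public BigInteger this[int index] { get { return items[index]; } }
}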
-
harold aptroot wrote:
Or, since it seems that numbers_temp is an array, you are already done. Just change numbers to point to numbers_temp and allocate a new numbers_temp array next time.
BTW, this is what I'm doing. I'm trying to get rid of this.
-
What? In my code? That's how the example I saw had it. Like I said, it's been a while since I've done .NET programming. But I think I'll switch to the list since it seems better suited than this.
-
Like this:
numbers_temp = new BigInteger[some number you apparently know];
// fill the array..
numbers = numbers_temp;
// repeat until finished
Yes, but I don't know what the size is in the loop and I'm using Array.Resize to make it grow. That seems to be slow and inefficient. I would rather only work with one array, since all I'm doing with the temp one is using it to recreate the actual array when I create a new copy with a bigger size. The generic list would take care of all that, I hope.
-
Is there a maximum number of elements that could be produced in the loop known in advance?
Nope, but my estimate says it's bigger than a long. :)
-
In all the languages I've coded, there seems to be a general rule of thumb when comparing linked lists with arrays and dynamic arrays. Doubly linked lists are better when you add and remove items a lot, but searching and navigation are slower than with an array; an array is much faster to navigate, which makes searching, sorting, etc. just faster.
Where dynamic arrays get slow is when adding items exceeds the "chunk" and the array needs to make a full copy of itself in a new memory location with a new empty chunk. By chunk I mean the number of array elements that are allocated but not yet used; as you fill the array, you eventually hit this limit, and adding one more item will cause this "thunking" I'm talking about. (Look up on Wiki what thunking is... the definition kind of applies here.)
How .NET implements its dynamic arrays under the hood I am not certain, but I would imagine it's not much different, as these are pretty standard data structures in any system I've used thus far. The fact that .NET has both lists and arrays indicates this is probably spot on. It seems like everything in programming is a choice of the right tool for the job. Deciding can be tricky, especially when both ways apply. Good luck!
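To see the "chunk" behaviour concretely, here is a small sketch that just watches List<T>.Capacity as items are added; every time the capacity jumps, the backing array has been reallocated and copied. The doubling numbers come from the default growth policy, which is an implementation detail:

using System;
using System.Collections.Generic;

class CapacityDemo
{
    static void Main()
    {
        var list = new List<int>();
        int lastCapacity = list.Capacity;

        for (int i = 0; i < 100; i++)
        {
            list.Add(i);
            if (list.Capacity != lastCapacity)
            {
                // The backing array was just reallocated and copied.
                Console.WriteLine("Count {0,3}: capacity grew {1} -> {2}",
                                  list.Count, lastCapacity, list.Capacity);
                lastCapacity = list.Capacity;
            }
        }
    }
}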
Know way too many languages... master of none!
-
I created a data structure, SlimList, and wrote an article about it. Read that. A SlimList is like a .NET List, except it avoids copying any elements on a resize. A .NET List will double the size of the underlying array and then copy the items over from the old array to the new array that is twice the size. A SlimList will just create an additional array that is the same size as the existing arrays combined, but will not copy any elements over. It does this by creating a "major" array to hold all the references to the "minor" arrays. All operations are on par with List with respect to big-O notation (e.g., an index operation takes O(1) time). However, as it is currently implemented, SlimList is a lot slower than List (about 600x slower in some cases). With a little assembly code and some other modifications (that I've outlined in the article), it could be made to be about 2x slower than List. Probably not worth it, which is why I didn't implement the optimizations myself, but it's an interesting data structure that does overcome the array copy issue that exists with current implementations of List. So I'd say look at SlimList to see what route you could take, but then use List, because that's going to be the most efficient route if speed is important.
Also, if you are going to create a very large list (as you indicated in one of your replies), then I recommend creating lists of lists rather than a single list. The reason is that memory fragmentation and other limits may prevent you from creating a single contiguous chunk of memory above a certain size (say, 256MB, a limit I've run into before due to fragmentation). That also helps to remedy the array copy problem. The sublists can always be created with a known capacity (e.g., 10,000,000 elements). Then it would only be the "major" list that would grow, and it would not grow very often. You would then only require two index operations to access a BigInteger within your list of lists, and you wouldn't hit the memory issues.
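As a rough sketch of that list-of-lists layout (the class name and the block size are invented for illustration; this is not code from the article):

using System.Collections.Generic;
using System.Numerics;

// Hypothetical illustration: a major list holding fixed-capacity minor lists,
// so no single contiguous allocation ever has to hold every element and
// growth never copies existing items.
class BlockedBigIntegerList
{
    private const int BlockSize = 1000000;   // assumed sublist capacity
    private readonly List<List<BigInteger>> blocks = new List<List<BigInteger>>();
    private long count;

    public long Count { get { return count; } }

    public void Add(BigInteger value)
    {
        // Start a new minor list whenever the last one is full.
        if (blocks.Count == 0 || blocks[blocks.Count - 1].Count == BlockSize)
            blocks.Add(new List<BigInteger>(BlockSize));

        blocks[blocks.Count - 1].Add(value);
        count++;
    }

    public BigInteger this[long index]
    {
        // Two index operations: pick the block, then the slot within it.
        get { return blocks[(int)(index / BlockSize)][(int)(index % BlockSize)]; }
    }
}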
-
All I care about is speed. I've already implemented the list and it is several orders of magnitude faster.
-
aspdotnetdev wrote:
Also, if you are going to create a very large list (as you indicated in one of your replies), then I recommend creating lists of lists, rather than a single list. The reason is that memory fragmentation and other limits may prevent you from creating a single contiguous chunk of memory above a certain size (say, 256MB, a limit I've run into before due to fragmentation). That also helps to remedy the array copy problem. The sublists can always be created with a known capacity(e.g., 10,000,000 elements). Then it would only be the "major" list that would grow, and it would not grow very often. You would then only require two index operations to access a BigInteger within your list of lists and you wouldn't hit the memory issues.
I'm already over 4 million items in my list and it increases by roughly a factor of three at each loop iteration. I'll have to think about this one since I expect it to grow much, much higher than this. Thanks!
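If the factor-of-three estimate holds, one cheap option is to pre-size the list before each pass so the backing array is reallocated at most once per iteration instead of in several doubling steps. A minimal sketch, assuming a List<BigInteger> called numbers as elsewhere in the thread:

using System.Collections.Generic;
using System.Numerics;

static class PresizeSketch
{
    // Assumed growth factor of roughly 3x per iteration, per the estimate above.
    public static void ReserveForNextPass(List<BigInteger> numbers)
    {
        long predicted = (long)numbers.Count * 3;
        if (predicted > int.MaxValue)
            predicted = int.MaxValue;                // List<T>.Capacity is an int

        // Setting Capacity reallocates the backing array once, up front,
        // instead of letting Add() grow it repeatedly during the pass.
        if (predicted > numbers.Capacity)
            numbers.Capacity = (int)predicted;
    }
}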
-
aspdotnetdev wrote:
The reason is that memory fragmentation and other limits may prevent you from creating a single contiguous chunk of memory above a certain size (say, 256MB, a limit I've run into before due to fragmentation).
What would happen if you don't have a chunk large enough? Does it fail or wait?
aspdotnetdev wrote:
The sublists can always be created with a known capacity(e.g., 10,000,000 elements). Then it would only be the "major" list that would grow, and it would not grow very often.
Sadly, I need something much, much larger than this size for my array at its largest point in the loop. I think I'm reaching my limit much earlier than that.