How about memory usage?
-
typedef struct { char buff[20]; } my_struct;

my_struct my_arr[1000][30][10][10][3];

int main()
{
    for ( int a = 0; a < 1000; ++a )
    for ( int b = 0; b < 30; ++b )
    for ( int c = 0; c < 10; ++c )
    for ( int d = 0; d < 10; ++d )
    for ( int e = 0; e < 3; ++e )
    {
        my_arr[a][b][c][d][e].buff[0] = '0';
        my_arr[a][b][c][d][e].buff[19] = '9';
    }
    return 0;
}
In a previous thread I asked about a problem with large data. Thanks so much for all of your replies. Based on your comments I checked my code again and found that I had made a mistake; the program now runs normally. I have since changed the code to another form:

typedef struct { CByteArray buff; } my_struct;

typedef struct { my_struct dat[30][10][10][3]; } data;

CArray<data, data&> my_arr;
In CWinApp::InitInstance():

int n = 1000;
my_arr.SetSize(n);
for ( int a = 0; a < n; ++a )
for ( int b = 0; b < 30; ++b )
for ( int c = 0; c < 10; ++c )
for ( int d = 0; d < 10; ++d )
for ( int e = 0; e < 3; ++e )
{
    my_arr[a].dat[b][c][d][e].buff.SetSize(1);
}
I tested n from 100 to 1000 and CArray does not seem to be effective. When n >= 500 the program runs noticeably more slowly, and when n = 1000 the system reports it is out of virtual memory. Please note that the buff size here is only 1 (versus 20 in the previous version). In my project the buff size varies, n is about 300-800, and all the remaining array dimensions are constant. Here are my questions: In this case, what is my best solution (using CArray or not)? If anyone has experience using CArray with large data, please give me a hint. Thank you very much.
-
nguyenvhn wrote:
typedef struct {
    CByteArray buff;
} my_struct;

typedef struct {
    my_struct dat[30][10][10][3];
} data;

CArray<data, data&> my_arr;
From a cursory glance at the MFC code, and unless I made a severe mistake: are you aware that every element of my_arr is of type data, which holds nine thousand CByteArrays? Each CByteArray is 20 bytes in size and requires construction, including construction of the parent class CObject. So just one instance of data requires at minimum 180,000 bytes.

For a given n, your code is:
1. Creating m = n * 9000 CByteArrays, for a total of at least m * 20 bytes.
2. Iterating over those m CByteArrays and calling SetSize on each, which allocates at least 16 bytes apiece (or is it 32? I really don't remember the allocation granularity of the SBH), for another m*16 or m*32 bytes.

That comes to m*(20+16) or m*(20+32) bytes, minimum. If n = 1, you allocate at least 324,000 bytes. If n = 1000, you allocate at least 324,000,000 bytes. Should the SBH allocation granularity be 32, you would get a memory load of about 446 MB just to hold those nine million bytes. Should you use the debug heap, I think you can safely add another m*16 or m*32 bytes (for a memory load between roughly 446 and 720 MB). Just creating the nine million *holders* of the data, not the data itself, costs 180 million bytes in the n = 1000 case.

"In my project, buff size is varying."
I suggest you at minimum stop using MFC for collections (CByteArray even has a vfptr!) and start using a custom allocator.

"In this case, which is my best solution (Using CArray or not)?"
Not. To be completely honest, I'd say using any MFC collection class is almost always the wrong thing, but that's a completely different discussion.
-
I don't remember anything about the MFC collection classes, and I plan for it to remain that way. Can you tell us something about your application? Paul
-
at minimum requires 180.000 bytes. For 'n' in your code you are: 1. Creating m = n * 9000 CByteArray's, for a total of at least m * 20 bytes. 2. Iterating over those m ByteArrays, calling SetSize for them, which allocates at least 16 (or is it 32, I really don't remember the allocation granularity of the SBH) bytes each, for a size of m*16 or m*32 bytes. This becomes m*(20+16) or m*(20+32) bytes, minimum. If n=1, you allocate at least 324.000 bytes. If n=1000, you allocate at least 324.000.000 bytes. Should SBH allocation granularity be 32, then you would get a memory load of about 446MB just to hold those nine million bytes. Should you use the debug heap, I think you can safely add another m*16 or m*32 bytes (for a memory load between 446 and 720MB respectively). Just to create the nine million *holders* of the data, not the data itself, for the n=1000 case, you use 180 million bytes. In my project, buff size is varying. I suggest you at minimum should stop using MFC for collections (CByteArray even have a vfptr!), and start using a custom allocator. In this case, which is my best solution (Using CArray or not)? Not. To be completely honest I'd say using any MFC collection class is almost always the wrong thing, but that's a completely different discussion.Thank you very much. I have no knowledge about custom allocator. Could you tell me more about it? In addition, Could you give me an article to make more clearly that "using any MFC collection class is almost always the wrong thing"? And so that, what is the true replacement? Please, I am using Visual C++ and MFC as the main programming framework. I hope to get lot of useful information from you. Thanks again.
-
Thanks for your reply. Previously my application worked well, but now the data size has grown very large. The application must handle it, and I don't want to write a new application from scratch; I want to update the old source code. In the old version I used the following declaration: my_struct my_arr[C][W][D][P][S], where my_struct is 20 bytes as mentioned in the previous thread. C, W, D, P, and S are constants, with C = 120. Now C can be 1000. I think my data structure may be too large. Could you give me any idea or example for solving this kind of problem? Thank you.
-
If your 'my_struct' can remain 20 bytes, then the following works for a variable 'C'. I don't recommend any kind of container, since the overhead of managing it and of implementing 'operator[]' is probably going to be significant. The total allocation is 180 MB, much less than the main RAM commonly fitted to modern PCs, so the quantity per se isn't too much. You do need to consider how you access it, so that you work with the CPU data caches rather than against them, but without knowing what you do to the data I can't help there.
struct my_struct
{
    char buff[20];
};

typedef my_struct my_struct_array[30][10][10][3];

const unsigned my_arr_size = 1000;

int main()
{
    my_struct_array * my_arr = new my_struct_array[my_arr_size];

    for ( unsigned a = 0; a < my_arr_size; ++a )
    for ( unsigned b = 0; b < 30; ++b )
    for ( unsigned c = 0; c < 10; ++c )
    for ( unsigned d = 0; d < 10; ++d )
    for ( unsigned e = 0; e < 3; ++e )
    {
        my_arr[a][b][c][d][e].buff[0] = '0';
        my_arr[a][b][c][d][e].buff[19] = '9';
    }

    delete [] my_arr;
    return 0;
}
Paul
-
nguyenvhn wrote: "Could you give me an article to make more clearly that 'using any MFC collection class is almost always the wrong thing'?"
Once you gain more experience in C++, it'll come to you. But as a starting place, you might want to Google some newsgroups.
"And so that, what is the true replacement?"
There is no silver bullet.