Reusing an Incremented Variable within a Single Statement
-
Skippums wrote:
I'm guessing VS2008 is smart enough not to allocate memory for it
It actually does, since it runs out of registers (I was a little surprised by this, but it makes sense once you look at the disassembled code). In practice, though, I'd imagine the cache would prevent this from being a big performance hit.
One question: Have you measured how much faster those a-priori optimizations are running compared to the very cleanly coded algorithm from the post above (using std::copy)?
-
I don't get a list of the type I want back from the API function call... I have to call each API method on a per-element basis. Basically, I have a cell-array from Matlab being passed in:
// prhs is of type "mxArray const *"
size_t const indexCount = mxGetNumberOfElements(prhs[0]);
size_t const dataSize = mxGetNumberOfElements(prhs[1]);
double const ** data = new double const *[dataSize];
// Get the 1-based indices into the data array
unsigned __int32 const *const indices = static_cast<unsigned __int32 const *>(mxGetData(prhs[0]));
for (size_t i = 0; i < dataSize; ++i) {
    // Get a double* to the data in the cell array at 0-indexed location i
    data[i] = mxGetPr(mxGetCell(prhs[1], i));
}
--data; // Make data 1-indexed so I don't have to modify "indices" to be 0-based
That is why I didn't do what you suggested.
Sounds like somebody's got a case of the Mondays -Jeff
So what's wrong with the wrapping-class solution to convert your array to a 1-based one? Seeing your code there, you could use std::generate on a std::vector to get the same effect as your manual loop. Generally, if you're doing low-level memory fiddling, pointer arithmetic and looping at the same time in C++, there's usually a simpler way. Cheers, Ash
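For what it's worth, here is a minimal sketch of what I mean, reusing the prhs and dataSize names from your snippet (it assumes <vector>, <algorithm> and mex.h are available; the CellPtrGen and OneBasedView names are purely illustrative, not anything from the MEX API):

// Generator functor for std::generate: returns the double* for cell 0, 1, 2, ...
struct CellPtrGen {
    mxArray const *cells;
    size_t i;
    explicit CellPtrGen(mxArray const *c) : cells(c), i(0) {}
    double const *operator()() { return mxGetPr(mxGetCell(cells, i++)); }
};

std::vector<double const *> data(dataSize);
std::generate(data.begin(), data.end(), CellPtrGen(prhs[1]));

// 1-based view over the vector, instead of decrementing a raw pointer
struct OneBasedView {
    std::vector<double const *> const &v;
    explicit OneBasedView(std::vector<double const *> const &vec) : v(vec) {}
    double const *operator[](size_t idx) const { return v[idx - 1]; }
};
OneBasedView data1(data); // data1[1] is the first cell's data, data1[dataSize] the last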
-
One question: Have you measured how much faster those a-priori optimizations are running compared to the very cleanly coded algorithm from the post above (using std::copy)?
Repeating my edit above: Out of curiosity, I benchmarked the various algorithms using just an int array of 10,000 and 100,000 elements.
void Test1()
{
    for (size_t i = 0; i < len; ++i)
        pDst[i + 1] = pSrc[i];
}

void Test2()
{
    for (size_t i = 0; i < len; )
    {
        const int temp = pSrc[i];
        pDst[++i] = temp;
    }
}

void Test3()
{
    memcpy(&pDst[1], pSrc, len * sizeof(int));
}

void Test4()
{
    int *src_start = (int*) &pSrc[0];
    int *src_end = (int*) &pSrc[len];
    std::copy(src_start, src_end, &pDst[1]);
}
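(The timing harness isn't shown above; purely for illustration, here is a minimal sketch of how the four functions might be driven, assuming pSrc, pDst and len are file-scope variables as the test functions imply. The clock()-based timer is just an example, not necessarily what was actually used.)

#include <cstdio>
#include <ctime>

void Time(const char *name, void (*fn)(), int reps)
{
    const std::clock_t start = std::clock();
    for (int r = 0; r < reps; ++r)
        fn();
    const long ticks = static_cast<long>(std::clock() - start);
    std::printf("%s: %ld clock ticks for %d reps\n", name, ticks, reps);
}

// e.g. Time("Test1", Test1, 1000); Time("Test3", Test3, 1000); ...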
These are very artificial tests, but as expected, Test3 and Test4 were the fastest, by about 15%. Test4 was often slightly faster than Test3 by a few cycles. I scratched my head, since both end up at memcpy() and Test4 has more apparent overhead, but then I realized it was the
len * sizeof(int)
calculation that slightly slows Test3(). Surprisingly, Test2 was ever so slightly faster than Test1 (by about 0.1% to 0.5% on my system); I suspect the CPU cache covers the "save" of the register.
-
I assume you meant to use "i++" on the third line of your second example, but I understand what you meant. I really thought that the statements:
int i = 1;
int j = (i) + ++i;
would be equivalent to "int j = 3;", but no matter how I apply the parentheses, it always comes out as 4, just as your response predicts it would. Thanks,
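(Side note: the parentheses only group the operands, they don't order the evaluation; reading i and writing it with ++i in the same expression is unsequenced, so strictly speaking the result isn't guaranteed at all, even if this compiler always produces 4. A sketch of the well-defined way to get 3:)

int i = 1;
int j = i + (i + 1); // well-defined: j == 3, i is still 1
++i;                 // do the increment as a separate, fully sequenced statement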
Sounds like somebody's got a case of the Mondays -Jeff
-
Thanks for the insight. A nice lesson for the "early optimizers"...
-
Well actually, I am trying to write something like the following where I am copying from a 0-indexed array to a 1-indexed array:
const int src[] = {0, 1};
const size_t srcLen = sizeof(src) / sizeof(int);
int *const dst = new int[srcLen] - 1;
for (size_t i = 0; i < srcLen; ++i)
    dst[i + 1] = src[i];
NOTE: I have simplified the array copy in the above example... I am actually using an external API function to get the value of the src array elements, which is why I am not just using the statement "const int* const dst = &src[0] - 1;". The compiler will probably optimize this by keeping the value of i + 1 from the array assignment for the next loop iteration in a register (as opposed to recomputing it), but if I could write the for-loop as:
for (size_t i = 0; i < srcLen; )
    dst[++i] = src[i];
This would *almost* guarantee that i is incremented only once. Given the initial response, I can't have a single statement with two different values for i, so I'm not able to do what I wanted anyway. It would probably have a higher probability of being optimized if I wrote the loop as:
for (size_t i = 0; i < srcLen; ) {
    const int temp = src[i];
    dst[++i] = temp;
}
Almost every compiler available would put temp in a register instead of allocating it on the stack, but I really don't want to write the loop like that. Guess I'll keep my fingers crossed that the compiler optimizes it! (I don't really care timing-wise, as the loop only iterates over a couple hundred thousand elements one time per run, which is an insignificant amount of time as a fraction of the program's run-time, but why not write efficient code when you can?)
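(For what it's worth, the whole shifted copy can also be written without any manual index bookkeeping; a minimal sketch using std::copy from <algorithm>, reusing the src, srcLen and dst names above:)

// Same effect as the loop: copies src[i] into dst[i + 1] for every i
std::copy(src, src + srcLen, &dst[1]);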
Sounds like somebody's got a case of the Mondays -Jeff
If your intent is accessing two arrays within a loop, then, no matter the relation between the indices, the fastest way would be to use two pointers to the individual elements and increment those. If you're so intent on improving performance, consider this: every direct access to an array element via an index value requires
1. loading the start address of the array,
2. getting the size of an element,
3. multiplying that by the index (minus 1),
4. adding that to the start address, and
5. dereferencing this address to get to the actual element.
As opposed to using pointers, which just requires
1. loading the pointer, and
2. dereferencing it.
Of course, incrementing the pointers eats up most of this advantage, as you have to add the element size each iteration. And most likely a good optimizer will convert your code into something like this anyway. What I want to say is that using an index just complicates things unnecessarily if all you want to do is access each element sequentially.
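(A minimal sketch of the two-pointer form described above, reusing the src, srcLen and dst names from the previous post; the variable names are just illustrative:)

const int *s = src;   // walks the source array
int *d = &dst[1];     // walks the 1-based destination
for (const int *end = src + srcLen; s != end; ++s, ++d)
    *d = *s;          // no index arithmetic inside the loop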
-
"why not write efficient code when you can?" X| Don't be daft. Write readable code when you can. Optimize when necessary and use a profiler to detect inefficencies.
-
Why use a performance profiler when I can just guess which parts will be inefficient during implementation? And for the record, my code is readable; the compiler understands it just fine. :-D
Sounds like somebody's got a case of the Mondays -Jeff
-
Skippums wrote:
Why use a performance profiler when I can just guess which parts will be inefficient
Unless you're writing Hello World, you will be surprised.
-
I guess I need to be more explicit when I type sarcastic comments. Sadly, I couldn't find the appropriate voice inflection characters on my keyboard. :)
Sounds like somebody's got a case of the Mondays -Jeff