For Tim and Mike
-
I thought I would start a new thread to bring this to your attention. I finally found the information I mentioned in an earlier thread: "STL Algorithms vs. Hand-Written Loops", an article by Scott Meyers excerpted from his Effective STL book. I hope it clears up any misinformation from my previous postings.
-
Ok, I agree almost totally with the article's point, but I (and I think most C++ users) find this (excerpts from the article):

    vector<int> v;
    int x, y;
    ...
    // iterate from v.begin() until an
    // appropriate value is found or
    // v.end() is reached
    vector<int>::iterator i = v.begin();
    for ( ; i != v.end(); ++i) {
        if (*i > x && *i < y) break;
    }
    // i now points to the value
    // or is the same as v.end()

or, better, this:

    vector<int> v;
    int x, y;
    ...
    // iterate from v.begin() until an
    // appropriate value is found or
    // v.end() is reached
    vector<int>::iterator i = v.begin();
    while ( (i != v.end()) && !(*i > x && *i < y) )
        ++i;
    // i now points to the value
    // or is the same as v.end()

much more readable than this:

    // find the first value val where the
    // "and" of val > x and val < y is true
    vector<int>::iterator i =
        find_if(v.begin(), v.end(),
                compose2(logical_and<bool>(),
                         bind2nd(greater<int>(), x),
                         bind2nd(less<int>(), y)));

and to hell with whatever optimization the compiler may do!

Crivo Automated Credit Assessment
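There is a middle ground worth noting: write the test once as a named predicate, and the find_if call stays short. This is only an illustrative sketch, not code from the article; the InOpenRange name and the sample values are made up here.

    #include <algorithm>
    #include <iostream>
    #include <vector>

    // Hypothetical predicate: true when x < val < y.
    struct InOpenRange {
        int x, y;
        InOpenRange(int lo, int hi) : x(lo), y(hi) {}
        bool operator()(int val) const { return val > x && val < y; }
    };

    int main() {
        std::vector<int> v;
        for (int n = 0; n < 10; ++n) v.push_back(n * 10);  // 0, 10, ..., 90

        int x = 25, y = 55;
        std::vector<int>::iterator i =
            std::find_if(v.begin(), v.end(), InOpenRange(x, y));

        if (i != v.end())
            std::cout << "first value in (" << x << ", " << y << ") is " << *i << "\n";
        return 0;
    }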
-
Ah, that is what you are talking about. Let's just say I don't agree with him. Something about having to hop all over the code every time I try to do a loop makes my skin crawl. His 3 basic points:

Efficiency: Algorithms are often more efficient than the loops programmers produce. I would have to see strong proof of this. It is true that you can write STL loops that perform badly (e.g. "for (int i = 0; i < some_std_vector.size(); ++i)"). But then you get into execution efficiency. CALLs are expensive. On a good day, the methods called by for_each would be hoisted and the CALL eliminated, but there is a limit to this. Then you have the problem of deeply nested inline code (which is basically what templates are when the routines aren't too large). The optimizer has more trouble optimizing deeply nested template code than it does flatter code. When I tested this, I compared find_if to a hand-built routine. In debug mode (worth noting, but not important), the hand-written routine was 30% faster when using the standard "for (ix = v.begin(); ix != v.end(); ++ix)" loop. On a whim, I changed the loop to cache the end iterator, making it "for (ix = v.begin(); ix != e; ++ix)", and the hand-written version was over 90% faster. Now, where it really counts: in release mode, the hand-written routine was still 30% faster EVEN AFTER the call to the BetweenValues test routine had been hoisted into the main routine, eliminating all calls. I think this is another case of something that is faster in theory but actually slower in the real world.

Correctness: Writing loops is more subject to errors than calling algorithms. I will give him this one. However, I think the rate of loop bugs is rather small, and aren't you just shifting one point of error to another when you need more complex loop termination and processing? Real-world data:
Loops and iteration bugs: 0.74% (total)
Terminal value or condition: 0.33%
Iteration variable processing: 0.01%
Other loop and iteration: 0.40%

Maintainability: Algorithm calls often yield code that is clearer and more straightforward than the corresponding explicit loops. I will go to my grave arguing against this. I just don't see how scattering your code all over humanity is a plus. IMHO, as a project grows in size, all these extra loop-processing routines will actually degrade maintainability by drastically increasing overall code complexity. In my software, I have over 1000
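For anyone who wants to repeat the comparison, here is a rough, self-contained sketch of the kind of test described above. Only the two loop forms (calling v.end() each iteration vs. caching it) and the find_if call come from the post; the timing harness, the vector contents, and the body of the BetweenValues predicate are assumptions made for illustration.

    #include <algorithm>
    #include <ctime>
    #include <iostream>
    #include <vector>

    // Assumed stand-in for the BetweenValues test mentioned above:
    // true when x < val < y.
    struct BetweenValues {
        int x, y;
        BetweenValues(int lo, int hi) : x(lo), y(hi) {}
        bool operator()(int val) const { return val > x && val < y; }
    };

    int main() {
        // Arbitrary sizes and values; only the last element matches,
        // so each search scans the whole vector.
        std::vector<int> v(1000000, 1);
        v.back() = 50;
        const int x = 25, y = 75;
        const int reps = 200;

        std::vector<int>::iterator hit = v.end();

        // 1. Hand-written loop, calling v.end() every iteration.
        std::clock_t t0 = std::clock();
        for (int r = 0; r < reps; ++r)
            for (std::vector<int>::iterator ix = v.begin(); ix != v.end(); ++ix)
                if (*ix > x && *ix < y) { hit = ix; break; }
        std::clock_t t1 = std::clock();

        // 2. Hand-written loop with the end iterator cached.
        std::vector<int>::iterator e = v.end();
        for (int r = 0; r < reps; ++r)
            for (std::vector<int>::iterator ix = v.begin(); ix != e; ++ix)
                if (*ix > x && *ix < y) { hit = ix; break; }
        std::clock_t t2 = std::clock();

        // 3. std::find_if with the predicate object.
        for (int r = 0; r < reps; ++r)
            hit = std::find_if(v.begin(), v.end(), BetweenValues(x, y));
        std::clock_t t3 = std::clock();

        std::cout << "loop, end() each time: " << (t1 - t0) << " ticks\n"
                  << "loop, cached end:      " << (t2 - t1) << " ticks\n"
                  << "find_if:               " << (t3 - t2) << " ticks\n"
                  << "(found: " << (hit != v.end() ? *hit : -1) << ")\n";
        return 0;
    }

Results will of course vary with compiler, settings, and data, which is exactly the point of the "one method, one compiler" caveat later in the thread.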
-
You've shaken my faith. You mean some authors don't really know what it's like in the real world? I thought the definition of author was, "I know how to do it better than you." :) At least that's the impression from some. Thanks for the timing information. I never really had the desire to find out; none of my projects really needed that level of optimization, until now. That's why the origin of this came to mind.
-
"I know how to do it better than you." I thought that was the definition of programmer. :) Take the timing stuff with a grain of salt. It tested one method, on one compiler. Hardly conclusive proof. More like food for thought. :) Tim Smith Descartes Systems Sciences, Inc.
-
i always thought authors were coders who didn't code anymore :) --- "every year we invent better idiot proof systems and every year they invent better idiots ... and the linux zealots still aren't being sterilized"
-
"i always thought authors were coders who didn't code anymore"
He who can does, he who cannot teaches (or writes). :laugh:
MyDotNet wrote: He who can does, he who cannot teaches (or writes). And he who can't teach, teaches teachers...
-
"He who can does, he who cannot teaches (or writes)."
And from code I've seen, you could also say "he who cannot communicate with humans, writes code for computers". Honestly, I have to say I'm getting really tired of this idea that as soon as someone writes a book, or devotes some time to teaching, they're assumed to be poor doers. Tim Lesher http://www.lesher.ws
-
Tim Smith wrote:
Real-world data:
Loops and iteration bugs: 0.74% (total)
Terminal value or condition: 0.33%
Iteration variable processing: 0.01%
Other loop and iteration: 0.40%

Hey, this is cool! Where did you find these stats? Where can I find more? I'm trying to introduce automated test scripts for my staff, and some stats would be useful both for convincing them and for deciding where to start. Crivo Automated Credit Assessment