goto statement
-
My point wasn't about code that died decades ago, but code that continued to live and be modified over decades! Readability is just one aspect; maintainability is more important. Besides, the author himself stated in the comments that his reason for using goto was implementing multistate transitions. Come on! I've used tools to generate those automatically from UML 10 years ago! And I could even choose whether I wanted to generate the state machine using switch or inheritance! In other words, there are valid alternatives and even tools that help you generate the code.
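To illustrate what such generated code looks like, here is a minimal hand-written sketch of the switch variant - hypothetical states and events, my own illustration rather than actual generator output:

// Hypothetical door controller, roughly the shape a switch-based generator emits.
// All state and event names are made up for illustration.
#include <cstdio>

enum class State { Closed, Open, Locked };
enum class Event { OpenIt, CloseIt, Lock, Unlock };

State next(State s, Event e) {
    switch (s) {
    case State::Closed:
        if (e == Event::OpenIt) return State::Open;
        if (e == Event::Lock)   return State::Locked;
        break;
    case State::Open:
        if (e == Event::CloseIt) return State::Closed;
        break;
    case State::Locked:
        if (e == Event::Unlock) return State::Closed;
        break;
    }
    return s;  // events that don't apply in the current state are ignored
}

int main() {
    State s = State::Closed;
    s = next(s, Event::OpenIt);   // Closed -> Open
    s = next(s, Event::CloseIt);  // Open -> Closed
    std::printf("final state: %d\n", static_cast<int>(s));
    return 0;
}

The inheritance variant replaces the switch with virtual calls on state objects; neither form needs a single goto.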
GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto)
Stefan_Lang wrote:
And I could even choose whether I wanted to generate the state machine using switch or inheritance! In other words, there are valid alternatives and even tools that help you generate the code.
None of which, however, addresses whether those generated solutions are optimal in terms of performance. In my experience, performance optimization based on actual bottleneck analysis can often lead to code that is less than ideal in some other respect, such as design and/or maintenance.
-
Tools can only get you so far, and they're not suitable for proving code correctness, so I'm not sure why you brought that up.

As for benefits: I've read and taken part in countless discussions, and not a single example brought up managed to convince me. In every single case there was a suitable alternative using standard control statements. Most of the time the person bringing it up either didn't see the proper way, or considered the effort of writing 2-5 additional lines of code too much to bear. Based on that experience I'm convinced that there is always a better alternative. People claiming otherwise are just not sufficiently experienced to see it, or understand the need.

That said, all of this assumes you're looking at code where proper coding guidelines and style are even worth caring about: if you're just hacking together a piece of throw-away code, then yes, use whatever suits you best and solves the problem. In actual production code that is going to live through years of maintenance and adding features, the presumed benefits of goto never outweigh the long term maintenance problems.
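To make those "few extra lines" concrete, here is a hedged, hypothetical example (my own illustration, not code from any of those discussions): escaping nested loops, probably the most commonly cited use of goto, handled by moving the loops into a small function and returning from it.

// Hypothetical: find a value in a small 2D grid.
// The goto version would jump out of both loops to a 'found:' label;
// extracting the search into a function lets 'return' do the same job.
#include <cstdio>

bool findInGrid(const int grid[3][3], int target, int& row, int& col) {
    for (int r = 0; r < 3; ++r) {
        for (int c = 0; c < 3; ++c) {
            if (grid[r][c] == target) {
                row = r;
                col = c;
                return true;  // replaces "goto found"
            }
        }
    }
    return false;             // replaces falling through to "not found"
}

int main() {
    const int grid[3][3] = { {1, 2, 3}, {4, 5, 6}, {7, 8, 9} };
    int row = 0, col = 0;
    if (findInGrid(grid, 5, row, col))
        std::printf("found at (%d, %d)\n", row, col);
    return 0;
}

The function version also gives the search a name and a testable interface, which is exactly the kind of long-term benefit I mean.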
GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto)
Stefan_Lang wrote:
Based on that experience I'm convinced that there is always a better alternative.
Could be. But I don't write code for fun; I get paid for it, and it is often critical code for which I can't spend weeks finding an optimal solution - the first one that is good enough goes out the door. Nor can I refactor millions of lines of code every two weeks every time I figure out a "better" way to do it. And neither can the guy who is going to maintain my code after I am gone.
Stefan_Lang wrote:
People claiming otherwise are just not sufficiently experienced to see it, or understand the need.
Of course there are always people willing to rationalize that their way is "best" despite the fact that they can't demonstrate that with objective data and often can't even construct a coherent argument as to why it is "best". And technology rationalizations are often based on nothing but technology while ignoring the realities of delivering software in a business environment.
Stefan_Lang wrote:
In actual production code that is going to live through years of maintenance and adding features, the presumed benefits of goto never outweigh the long term maintenance problems.
That would of course be an excellent argument if in fact none of the following was true.
- Maintenance was the sole and only driving business requirement.
- The business had a firm enough grasp on process control to be able to quantify maintenance costs.
- The process control was structured enough that it could enforce quality on the entire rest of the enterprise, and to such an extent that the trivial cost of infrequent code misuse rose above the most minuscule noise level of maintenance cost.
Versus, for example: no requirements, poor requirements, unused requirements, invalid requirements, zero architecture, chaotic process management, etc, etc, etc.
-
My point wasn't about code that died decades ago, but code that continued to live and be modified over decades! Readability is just one aspect; maintainability is more important. Besides, the author himself stated in the comments that his reason for using goto was implementing multistate transitions. Come on! I've used tools to generate those automatically from UML 10 years ago! And I could even choose whether I wanted to generate the state machine using switch or inheritance! In other words, there are valid alternatives and even tools that help you generate the code.
GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto)
We essentially agree that gotos are undesirable the vast majority of the time. I'm just not positive that gotos are bad all of the time. One can develop tunnel realities, even with decades of experience. Take the saying at the end of your post: "GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto)" While I expect that is often the case, I would think it untrue in some situations. If a well-managed team were to find a reason to use a goto, and the team had guidelines that were documented in the code around the goto not to do this elsewhere, then I don't think that gotos would necessarily proliferate.

But again, I've always used break or continue and avoided them too, and I agree with your philosophies, just not the absolutist part of your philosophy. I don't know absolutely every situation for every program in every context, and there actually is a documented benefit to gotos in some circumstances. Even though there can be dire consequences from using a single goto, nonetheless the tradeoff might be that without using a goto, something isn't fast enough to do that job. In that case, the developer might decide that practicality takes precedence over good software engineering. I can't prove that situation doesn't exist. So I can't be absolute about the rule.

I don't get why someone posted that gotos were needed for a state machine. A switch statement works, and there are other solutions too. The book "Design Patterns" describes the Strategy pattern, in which (and I expect you know this already, Stefan) different "state" classes derive from a common base class, and switching states is done by switching the type of object. Virtual methods are called on the current state object.

And, except in rare instances, I suspect a goto isn't much faster than well-written code, and probably even slower in some cases. Today, often instruction cache fetch limitations, data cache fetch limitations, or instruction ordering (the last less of an issue with modern Intel compilers than it used to be) affect performance more than the number of instructions in the code path. (I know you know this too.) So, in general, the only way today to find out if code is faster is to profile the code. (Note, I wrote "in general" there - of course there are exceptions.) I would need pretty strong proof in a particular situation to even consider that using a goto would be faster.
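For anyone unfamiliar with that pattern, here is a bare-bones sketch of it - made-up states, my own illustration rather than anything taken from the book:

// Hypothetical two-state machine using the pattern described above:
// each state is a class, a transition returns the next state object,
// and the caller just invokes a virtual method on the current state.
#include <cstdio>
#include <initializer_list>
#include <memory>

struct State {
    virtual ~State() = default;
    virtual const char* name() const = 0;
    // Returns the next state, or nullptr to stay in the current one.
    virtual std::unique_ptr<State> onEvent(char event) const = 0;
};

struct Running;

struct Idle : State {
    const char* name() const override { return "Idle"; }
    std::unique_ptr<State> onEvent(char event) const override;
};

struct Running : State {
    const char* name() const override { return "Running"; }
    std::unique_ptr<State> onEvent(char event) const override {
        if (event == 's') return std::make_unique<Idle>();  // 's' = stop
        return nullptr;
    }
};

std::unique_ptr<State> Idle::onEvent(char event) const {
    if (event == 'g') return std::make_unique<Running>();   // 'g' = go
    return nullptr;
}

int main() {
    std::unique_ptr<State> current = std::make_unique<Idle>();
    for (char event : {'g', 's'}) {
        if (auto next = current->onEvent(event))
            current = std::move(next);
        std::printf("after '%c': %s\n", event, current->name());
    }
    return 0;
}

Swapping the current object is the state transition; the dispatch itself involves neither a switch nor a goto.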
-
Ah, let me guess, your brain is mutilated by that pathetic OOP thingy? Ok. Carry on writing your buggy, slow, unreadable code. But do not lecture the real programmers on how to do things the right way. You'll never be able to implement a bytecode interpreter faster than this one.
Look, I disagree with the absolutist position too, but I agree that using gotos is generally bad. I just acknowledge there might be exceptions. But, having worked on code with gotos that I did not write but had to maintain, and having worked on object-oriented code, in my experience the code with gotos had both many more bugs and worse bugs. I've had this same experience in more than one job, so I see it as a recurring pattern.

And, I've also seen object-oriented code that was faster than the older C code. Efficiency has to do with many factors other than the language used. If only encapsulation is used in C++, there is no calling penalty and the generated code is essentially C code. And the overhead of calling virtual functions is often less than that of a 'switch' statement, so if the setting being tested is inside a loop, doing the switch once outside the loop and choosing an object whose virtual function is called inside the loop can be much faster. In C, you can do the same thing with function pointers, but it's more complicated, and people don't typically write code that way in C. In C++, it's very simple to do, plus there are other benefits.

Here's another issue about performance unrelated to the language used: I once unrolled a loop for a floating point routine and the code got significantly faster. I did the same thing for a routine that did the same calculations, but was for a different processor. That routine used integer arithmetic. When I unrolled the loop, the routine got much slower! The integer routine required shifting the products down after every multiplication. These extra shift instructions made the code grow to over 4 Kbytes, and 4 Kbytes is the size of the instruction cache. The code was cache-thrashing.

I've seen the pattern of people imagining their code was faster than some other implementation, when a profiler later showed that the other implementation was actually faster. Modern systems do multiple levels of caching for both instructions and data, do branch prediction, and even do instruction reordering based on both the instruction types and register pressure. Predicting how fast code will run is more complicated today than it ever was. Without using a profiler, it's usually just guesswork.

Finally, languages are tools. Use the right tool for the job. If you think object-oriented languages are slow and buggy, then you definitely don't understand them. Also, performance is not always the most important consideration.
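To be concrete about the "switch outside the loop" point above, here is a simplified, hypothetical sketch (not measured code - profile before believing any claim about it):

// Hypothetical audio-ish example: choose the per-sample operation once,
// outside the loop, then pay only one virtual call per sample inside it.
// In C, a function pointer selected before the loop gives the same shape.
#include <cstdio>
#include <vector>

struct SampleProcessor {
    virtual ~SampleProcessor() = default;
    virtual float process(float x) const = 0;
};

struct Gain : SampleProcessor {
    float factor;
    explicit Gain(float f) : factor(f) {}
    float process(float x) const override { return x * factor; }
};

struct Clamp : SampleProcessor {
    float process(float x) const override { return x > 1.0f ? 1.0f : x; }
};

enum class Mode { Gain, Clamp };

void processBuffer(std::vector<float>& buffer, Mode mode) {
    Gain gain(0.5f);
    Clamp clamp;
    const SampleProcessor* p = nullptr;
    switch (mode) {                  // the switch runs once...
    case Mode::Gain:  p = &gain;  break;
    case Mode::Clamp: p = &clamp; break;
    }
    for (float& x : buffer)          // ...not once per sample
        x = p->process(x);
}

int main() {
    std::vector<float> buffer = {0.2f, 0.8f, 1.6f};
    processBuffer(buffer, Mode::Gain);
    std::printf("%g %g %g\n", buffer[0], buffer[1], buffer[2]);
    return 0;
}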
-
Stefan_Lang wrote:
And I could even choose whether I wanted to generate the state machine using switch or inheritance! In other words, there are valid alternatives and even tools that help you generate the code.
None of which, however, addresses whether those generated solutions are optimal in terms of performance. In my experience, performance optimization based on actual bottleneck analysis can often lead to code that is less than ideal in some other respect, such as design and/or maintenance.
Please tell us about your experience. In some projects I worked on, there was UI code, audio code, and video code. The video code was the bottleneck. I can't imagine it making any sense to change the design of the system to fix the video code. Nor was the overall design of the video codec redesigned. Only minor implementation changes were made. And, ironically, using a goto for performance optimization is often an example of code that is less than ideal. (Note, I didn't write "always" - I can't know that because I don't know all possible situations - and sometimes that last tiny bit of performance gain does matter - but such situations are certainly extremely rare).

As an aside, to make a generalization (and most generalizations are false, including this one!), the overall concern for code should be maintainability. Typically, 85% of the cost (or time) for software is maintenance, so making code easy to maintain usually should override all other concerns. Also, for most software, performance isn't an issue. So, if a goto is ever justified, it would have to be in a fringe case.
-
The threaded OCaml bytecode interpreter is several times faster than the switch-based one. Far from being "a very small speed increase".
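For those who have never looked at it: "threaded" here means each opcode handler jumps directly to the next handler instead of going back through a switch. A toy sketch using the GCC/Clang labels-as-values extension (made-up opcodes - an illustration of the dispatch style, not the actual interp.c code):

// Toy threaded interpreter. Relies on the GCC/Clang "labels as values"
// extension (&&label and goto *ptr); it is not standard C++ and is only
// a sketch of the dispatch style, not the real OCaml implementation.
#include <cstdio>

enum Op : unsigned char { OP_PUSH1, OP_ADD, OP_PRINT, OP_HALT };

void runThreaded(const unsigned char* code) {
    static void* dispatch[] = { &&op_push1, &&op_add, &&op_print, &&op_halt };
    int stack[16];
    int sp = 0;
    const unsigned char* pc = code;

#define NEXT() goto *dispatch[*pc++]   // jump straight to the next handler
    NEXT();

op_push1: stack[sp++] = 1;                      NEXT();
op_add:   --sp; stack[sp - 1] += stack[sp];     NEXT();
op_print: std::printf("%d\n", stack[sp - 1]);   NEXT();
op_halt:  return;
#undef NEXT
}

int main() {
    const unsigned char program[] = { OP_PUSH1, OP_PUSH1, OP_ADD, OP_PRINT, OP_HALT };
    runThreaded(program);   // prints 2
    return 0;
}

The switch-based alternative replaces NEXT() with a break back to a single switch at the top of a loop; the argument in this thread is over how much that extra loop and branch overhead costs on real workloads.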
Yes, but I wrote: "Using a goto is likely to only result in a very small speed increase." (Of course, that applies only in specific situations, as listed in the paper by Hopkins that I mentioned earlier.) I did not write anything at all to refute what you wrote.
-
Are you deliberately avoiding commenting on the OCaml bytecode interpreter example? Show me the "better alternative", or admit that your so-called "experience" is deeply flawed and very limited.
You replied to me, not Stefan. Or it might be that you're arguing about switch statements when this topic is about the goto. Since you haven't written that the OCaml bytecode interpreter uses gotos, it's not clear where you're going. Plus, is there only one difference between the two implementations? How does this compare to using virtual functions to represent state? Perhaps that would be faster still.
-
The goto "debate" is not over. Dijkstra did a bit of trolling, and now hordes of incompetent dummies are taking his jokes as some kind of sacred revelation. There are *no* arguments against goto, besides the complete ignorance of the opponents. I pointed to several code examples which absolutely *must* use goto. And you, goto haters, as usual, ignored the uncomfortable truth. Mind explaining how you would rewrite the OCaml bytecode interpreter without goto? The code is here, in case the goto haters are as low as I suspect and cannot even use Google: https://github.com/ocaml/ocaml/blob/trunk/byterun/interp.c And please, mind explaining how exactly this code by Knuth is "unreadable": http://www.literateprogramming.com/adventure.pdf
Of course there are arguments against using the goto. Have you actually read Dijkstra's paper? I did, and it was not a joke. I also read Hopkins' paper, "A Case for the GOTO". That was a serious paper too.
-
No way. Switch is semantically more complex, and if you want to implement a state machine over it, you need to introduce additional entities - namely, a current state variable. And, in the case of C, there is a horrible way to fail by forgetting a break statement - which also has nothing to do with the essence of a mere state transition.
The argument about forgetting a break statement is not valid as that could apply to forgetting a goto statement too. And, the Strategy pattern can implement a state machine. There aren't just two alternatives.
-
Stefan_Lang wrote:
And I could even choose whether I wanted to generate the state machine using switch or inheritance! In other words, there are valid alternatives and even tools that help you generate the code.
None of which, however, addresses whether those generated solutions are optimal in terms of performance. In my experience, performance optimization based on actual bottleneck analysis can often lead to code that is less than ideal in some other respect, such as design and/or maintenance.
The case I mentioned was an embedded system where the state machines modelled hardware actuators and sensors. And yes, we had critical performance conditions too. The performance of the state machine implementation was never an issue, far from it, even though we did have serious problems staying within the given hard timing limits! And when I say hard timing limits, I mean hard - it wasn't a case of users having to wait a bit longer because of clogged video streaming, it was a case of users potentially being seriously injured or not.

Besides, any decent compiler will translate a switch statement into a goto anyway, so there is no need at all to obfuscate your code and decrease readability and maintainability. Even worse: explicitly programming gotos may prevent the compiler from optimizing your code, so a set of gotos replacing a switch may in fact incur a performance penalty if you're not very careful!
GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto)
-
Stefan_Lang wrote:
Based on that experience I'm convinced that there is always a better alternative.
Could be. But I don't write code for fun; I get paid for it, and it is often critical code for which I can't spend weeks finding an optimal solution - the first one that is good enough goes out the door. Nor can I refactor millions of lines of code every two weeks every time I figure out a "better" way to do it. And neither can the guy who is going to maintain my code after I am gone.
Stefan_Lang wrote:
People claiming otherwise are just not sufficiently experienced to see it, or understand the need.
Of course there are always people willing to rationalize that their way is "best" despite the fact that they can't demonstrate that with objective data and often can't even construct a coherent argument as to why it is "best". And technology rationalizations are often based on nothing but technology while ignoring the realities of delivering software in a business environment.
Stefan_Lang wrote:
In actual production code that is going to live through years of maintenance and adding features, the presumed benefits of goto never outweigh the long term maintenance problems.
That would of course be an excellent argument if in fact none of the following was true.
- Maintenance was the sole and only driving business requirement.
- The business had a firm enough grasp on process control to be able to quantify maintenance costs.
- The process control was structured enough that it could enforce quality on the entire rest of the enterprise, and to such an extent that the trivial cost of infrequent code misuse rose above the most minuscule noise level of maintenance cost.
Versus, for example: no requirements, poor requirements, unused requirements, invalid requirements, zero architecture, chaotic process management, etc, etc, etc.
jschell wrote:
Nor can I refactor millions of lines of code every two weeks every time I figure out a "better" way to do it.
Agreed. The code I'm working on has quite a few gotos, but I don't have the time to dig through the code, and I lack the test cases to do a safe refactoring, so they'll remain right there unless I find the code is broken.
jschell wrote:
And technology rationalizations are often based on nothing but technology while ignoring the realities of delivering software in a business environment.
Absolutely. The points I've made refer to creating new code, not modifying existing code to either insert or remove gotos. My point is that you shouldn't use goto in new code, or insert it into existing code where there is no goto yet. I claim that if you see no good or at least equivalent alternative using other language constructs, then maybe you haven't looked hard enough. I'm willing to concede that there may be cases where there is a real benefit to using goto over any alternative, but I can't think of an example in C++, as long as you're using a decent compiler with a good optimizer that will translate alternate control statements into gotos anyway.
jschell wrote:
That would of course be an excellent argument if in fact none of the following was true.
- Maintenance was the sole and only driving business requirement.
- The business had a firm enough grasp on process control to be able to quantify maintenance costs.
- The process control was structured enough ...
Maintenance doesn't need to be the sole requirement and of course never is. But ignoring it would be a fallacy, unless your application is supposed to be throw-away code that shouldn't be maintained (and I already said that for that kind of code all bets are off - there's no point in discussing coding guidelines at all). A business not able to quantify and control maintenance cost will soon be out of business, specifically in software development. There's a reason why there are SCRUM, XP, Agile, (R)UP, etc. Ideally, process control should indeed enforce quality on the entire enterprise - that's what these processes are modelled to achieve! We as software developers should strive to contribute towards that goal and leave the decision whether our efforts result in small or big gains to the project leaders. In my experience, while maintenance cost is considerably lower per year or month compared to development, it is never minuscule.
-
We essentially agree that gotos are undesirable the vast majority of the time. I'm just not positive that gotos are bad all of the time. One can develop tunnel realities, even with decades of experience. Take the saying at the end of your post: "GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto)" While I expect that is often the case, I would think it untrue in some situations. If a well-managed team were to find a reason to use a goto, and the team had guidelines that were documented in the code around the goto not to do this elsewhere, then I don't think that gotos would necessarily proliferate.

But again, I've always used break or continue and avoided them too, and I agree with your philosophies, just not the absolutist part of your philosophy. I don't know absolutely every situation for every program in every context, and there actually is a documented benefit to gotos in some circumstances. Even though there can be dire consequences from using a single goto, nonetheless the tradeoff might be that without using a goto, something isn't fast enough to do that job. In that case, the developer might decide that practicality takes precedence over good software engineering. I can't prove that situation doesn't exist. So I can't be absolute about the rule.

I don't get why someone posted that gotos were needed for a state machine. A switch statement works, and there are other solutions too. The book "Design Patterns" describes the Strategy pattern, in which (and I expect you know this already, Stefan) different "state" classes derive from a common base class, and switching states is done by switching the type of object. Virtual methods are called on the current state object.

And, except in rare instances, I suspect a goto isn't much faster than well-written code, and probably even slower in some cases. Today, often instruction cache fetch limitations, data cache fetch limitations, or instruction ordering (the last less of an issue with modern Intel compilers than it used to be) affect performance more than the number of instructions in the code path. (I know you know this too.) So, in general, the only way today to find out if code is faster is to profile the code. (Note, I wrote "in general" there - of course there are exceptions.) I would need pretty strong proof in a particular situation to even consider that using a goto would be faster.
Bill_Hallahan wrote:
Take the saying at the end of your post
The key word is "tend". I liked that statement (also found in this thread, btw.), so I put it in my sig. Please note that I did not state that gotos are always bad, only that there are always alternatives. I agree that such an alternative may only be "equivalent", not better, depending on how you measure the "goodness" of the code. As I said in another post: a decent C++ compiler will translate most control statements into gotos anyway, and optimize them in ways you may not even have thought of when considering your variant of goto coding. Therefore there is likely no performance benefit, and you may actually risk performance penalties by being more explicit about how exactly you like to use and place your gotos. I think we agree on pretty much everything else anyway. Thanks for your thoughtful responses.
GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto)
-
What? The SSA transform is performed on a CFG, and it absolutely does not matter whether there are irreducible subgraphs in the flow. Not to mention the trivial fact that an irreducible CFG can always be transformed into a reducible one by subgraph cloning.
Yes, but injudicious use of low-level constructs such as goto will add complexity to the CFG. I know every loop, if statement, etc. is implemented using branches - but those constructs are there explicitly so you can avoid the lower-level ones. It's like the argument that assembler is more efficient - except it usually isn't, in the presence of modern architectural considerations such as caching and instruction pipelines. Modern compilers can target these architectures better, and higher-level constructs similarly give the compiler hints about the intent of the code that are not available with the lower-level cases. I will acknowledge there are places where goto is useful, but I'd go with the advice that it should be avoided in 99.99% of cases, and even then used only if you're fully aware of the repercussions. In my experience, almost all uses of gotos can be replaced by higher-level constructs. There are exceptions, like the OCaml usage, but it's fairly unusual to be working on code like that. Many parser generators also generate code using gotos - that one is less arguable, as there are frequently better alternatives.
"If you don't fail at least 90 percent of the time, you're not aiming high enough." Alan Kay.
-
Why do so many hate this statement and advise against using it? I used it when I started programming with BASIC and GWBASIC. It also exists in C#. The trouble lies with the programmer who misuses it.
-
switch
GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto)
-
Bill_Hallahan wrote:
Take the saying at the end of your post
The key word is "tend". I liked that statement (also found in this thread, btw.), so I put it in my sig. Please note that I did not state that gotos are always bad, only that there are always alternatives. I agree that such an alternative may only be "equivalent", not better, depending on how you measure the "goodness" of the code. As I said in another post: a decent C++ compiler will translate most control statements into gotos anyway, and optimize them in ways you may not even have thought of when considering your variant of goto coding. Therefore there is likely no performance benefit, and you may actually risk performance penalties by being more explicit about how exactly you like to use and place your gotos. I think we agree on pretty much everything else anyway. Thanks for your thoughtful responses.
GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto)
Thank you Stefan.
-
Look, I disagree with the absolutist position too, but I agree that using gotos is generally bad. I just acknowledge there might be exceptions. But, having worked on code with gotos that I did not write but had to maintain, and having worked on object-oriented code, in my experience the code with gotos had both many more bugs and worse bugs. I've had this same experience in more than one job, so I see it as a recurring pattern.

And, I've also seen object-oriented code that was faster than the older C code. Efficiency has to do with many factors other than the language used. If only encapsulation is used in C++, there is no calling penalty and the generated code is essentially C code. And the overhead of calling virtual functions is often less than that of a 'switch' statement, so if the setting being tested is inside a loop, doing the switch once outside the loop and choosing an object whose virtual function is called inside the loop can be much faster. In C, you can do the same thing with function pointers, but it's more complicated, and people don't typically write code that way in C. In C++, it's very simple to do, plus there are other benefits.

Here's another issue about performance unrelated to the language used: I once unrolled a loop for a floating point routine and the code got significantly faster. I did the same thing for a routine that did the same calculations, but was for a different processor. That routine used integer arithmetic. When I unrolled the loop, the routine got much slower! The integer routine required shifting the products down after every multiplication. These extra shift instructions made the code grow to over 4 Kbytes, and 4 Kbytes is the size of the instruction cache. The code was cache-thrashing.

I've seen the pattern of people imagining their code was faster than some other implementation, when a profiler later showed that the other implementation was actually faster. Modern systems do multiple levels of caching for both instructions and data, do branch prediction, and even do instruction reordering based on both the instruction types and register pressure. Predicting how fast code will run is more complicated today than it ever was. Without using a profiler, it's usually just guesswork.

Finally, languages are tools. Use the right tool for the job. If you think object-oriented languages are slow and buggy, then you definitely don't understand them. Also, performance is not always the most important consideration.
Bill_Hallahan wrote:
Without using a profiler, it's usually just guesswork.
That point needs clarification. I don't deliver code. I deliver systems. The customers that use my systems don't care if a for loop is faster or not. They do care how fast the business functions of the system are though. There is no way that I can guess which piece of code is going to prove to be a bottleneck in a system. And I haven't worked with anyone that can do that either. So profiling is always required. Not to mention of course that profiling systems is unlikely to substantially increase the speed of the application. The only time it does lead to substantial increases is when it identifies points which were poorly designed or architected in the first place. Although to be fair I haven't worked on anything that I would consider a small system for years (perhaps 2 decades.) So that probably colors my experiences.
-
Please tell us about your experience. In some projects I worked on, there was UI code, audio code, and video code. The video code was the bottleneck. I can't imagine it making any sense to change the design of the system to fix the video code. Nor was the overall design of the video codec redesigned. Only minor implementation changes were made. And, ironically, using a goto for performance optimization is often an example of code that is less than ideal. (Note, I didn't write "always" - I can't know that because I don't know all possible situations - and sometimes that last tiny bit of performance gain does matter - but such situations are certainly extremely rare).

As an aside, to make a generalization (and most generalizations are false, including this one!), the overall concern for code should be maintainability. Typically, 85% of the cost (or time) for software is maintenance, so making code easy to maintain usually should override all other concerns. Also, for most software, performance isn't an issue. So, if a goto is ever justified, it would have to be in a fringe case.
Bill_Hallahan wrote:
Please tell us about your experience.
My ultimate performance-impact example was a report that would have taken 4-12 hours to run and 3-6 weeks to implement, most of which would have been optimization to get into the lower end of that estimate. The original performance estimate was based on the time to run an existing report, so the confidence factor was high. This was all due to one requirement on the report. The business person who requested it took one look at the problem requirement and promptly stated they didn't need it. After that it took 1.5 days to implement the report, and it ran in a couple of seconds.

In another case (in the early days of OO adoption) a senior engineer insisted on making everything an object, including, in one case, a value that was specified to be an integer in the range of 0-255. So the engineer required a class for the integer. That single requirement led to a dialog box that made the user wait every time it was displayed. I ended up creating a cache for the required class just to speed that up.
Bill_Hallahan wrote:
And, ironically, using a goto for performance optimization is often an example of code that is less than ideal. (Note, I didn't write "always" - I can't know that because I don't know all possible situations - and sometimes that last tiny bit of performance gain does matter - but such situations are certainly extremely rare).
I agree with all of that. But it doesn't address what I was saying. I could point out that I have seen abomination designs that used classes, and in fact I know I created several of those myself long ago. So misuse is misuse. It is what it is. But beyond that, when one has a somewhat ideal implementation, one might find that, to eke out just a bit more speed, one needs to modify what would otherwise be a fairly good OO design/architecture and make it less than ideal, because some odd change produces just enough performance that one can deliver it.
Bill_Hallahan wrote:
Also, for most software, performance isn't an issue.
Depends on what you mean by "software". Most deliverables will have either implicit or even explicit performance requirements with respect to business processes. But most of the code in such a deliverable will have nothing to do with making the system faster. Nothing is more irritating than f
-
jschell wrote:
Nor can I refactor millions of lines of code every two weeks every time I figure out a "better" way to do it.
Agreed. The code I'm working on has quite a few gotos, but I don't have the time to dig through the code, and I lack the test cases to do a safe refactoring, so they'll remain right there unless I find the code is broken.
jschell wrote:
And technology rationalizations are often based on nothing but technology while ignoring the realities of delivering software in a business environment.
Absolutely. The points I've made refer to creating new code, not modifying existing code to either insert or remove gotos. My point is that you shouldn't use goto in new code, or insert it into existing code where there is no goto yet. I claim that if you see no good or at least equivalent alternative using other language constructs, then maybe you haven't looked hard enough. I'm willing to concede that there may be cases where there is a real benefit to using goto over any alternative, but I can't think of an example in C++, as long as you're using a decent compiler with a good optimizer that will translate alternate control statements into gotos anyway.
jschell wrote:
That would of course be an excellent argument if in fact none of the following was true.
- Maintenance was the sole and only driving business requirement.
- The business had a firm enough grasp on process control to be able to quantify maintenance costs.
- The process control was structured enough ...
Maintenance doesn't need to be the sole requirement and of course never is. But ignoring it would be a fallacy, unless your application is supposed to be throw-away code that shouldn't be maintained (and I already said that for that kind of code all bets are off - there's no point in discussing coding guidelines at all). A business not able to quantify and control maintenance cost will soon be out of business, specifically in software development. There's a reason why there are SCRUM, XP, Agile, (R)UP, etc. Ideally, process control should indeed enforce quality on the entire enterprise - that's what these processes are modelled to achieve! We as software developers should strive to contribute towards that goal and leave the decision whether our efforts result in small or big gains to the project leaders. In my experience, while maintenance cost is considerably lower per year or month compared to development, it is never minuscule.
Stefan_Lang wrote:
but I can't think of an example in C++
The only time I ever saw one was in code that implemented printf/sprintf, which was part of a column in the C/C++ Users Journal. I believe the columnist made the case in the column itself for why goto was used, and I certainly couldn't find anything wrong with it. And at least at that point in time IO was often a significant bottleneck (something I knew from personal experience), so slowing it down further wasn't an option. No others that I was sure of.
Stefan_Lang wrote:
Maintenance doesn't need to be the sole requirement and of course never is. But ignoring it would be a fallacy,
The point however is that in the vast majority of development shops the actual cost of maintenance is significant because of other, more serious factors. So minor corrections will have no impact.
Stefan_Lang wrote:
A business not able to quantify and control maintenance cost
Err... I have worked for, and know of, even more businesses that do not quantify maintenance costs and do not even attempt to track them. All still in business. They work under the 'next release' umbrella, and even that is often poorly tracked, with the development department often seeming to be nothing but a black hole that money gets poured into. I worked a contract one time for a company whose software development department hadn't delivered anything for 18 months, and even failed to deliver just the specs for the interface APIs that I was supposed to be working towards for more than 6 months. (Both interesting and scary to write code for an API by guessing what it might do.)
Stefan_Lang wrote:
In my experience, while maintenance cost is considerably lower per year or month compared to development, it is never minuscule
My point is that it is not measured and so is unknown. The vast number of places do not even take a minimal approach to tracking what it costs. And there are proven factors that will impact the actual cost far more than code misuse will.
Stefan_Lang wrote:
Also you shouldn't neglect the time you need to fix a bug: if you need double the time because of sloppy coding
Yes, if it is taking you twice as long to fix every bug because the code has too many gotos that are used completely incorrectly
-
Bill_Hallahan wrote:
Without using a profiler, it's usually just guesswork.
That point needs clarification. I don't deliver code. I deliver systems. The customers that use my systems don't care if a for loop is faster or not. They do care how fast the business functions of the system are though. There is no way that I can guess which piece of code is going to prove to be a bottleneck in a system. And I haven't worked with anyone that can do that either. So profiling is always required. Not to mention of course that profiling systems is unlikely to substantially increase the speed of the application. The only time it does lead to substantial increases is when it identifies points which were poorly designed or architected in the first place. Although to be fair I haven't worked on anything that I would consider a small system for years (perhaps 2 decades.) So that probably colors my experiences.
jschell: "Not to mention of course that profiling systems is unlikely to substantially increase the speed of the application. The only time it does lead to substantial increases is when it identifies points which were poorly designed or architected in the first place."

I have found that profiling the code and subsequently optimizing it can, in some cases, cause significant speedups even in a well-designed and well-implemented system. However, I don't doubt that in whatever problem domain you worked in, optimizing the code wasn't worthwhile. Some things can't be made significantly faster. And, even when code can be sped up a lot, it's not always necessary.

Digital filters, which are used in audio codecs, video codecs, imaging, and sometimes graphics, can be sped up dramatically - running 1.5 to 3 times faster, or rarely even faster - by writing specialized assembly routines that use parallel (SIMD) instructions. Modern Intel and AMD processors have wide registers that allow multiplies and adds of multiple numbers at the same time. These are Single Instruction Multiple Data (SIMD) instructions. Even Intel's parallel compiler cannot handle the special parallel instructions in an optimal fashion in all cases. There's still a need for hand-written assembly code in some cases. The C or C++ code that implements the algorithms is not badly designed; the compiler often can't generate the best code using these SIMD instructions today. Compilers can handle much more than in the past, but they still aren't as good as a person yet for some things. Intel now sells various routines that do mathematical operations, such as a dot product, and even a modern video codec, and these routines use the SIMD instructions internally, so less SIMD code has to be written today. The routines don't cover all cases though.

In a previous job, I wrote code to rotate a color image using bicubic filtering. I then sped up rotating the image on a page by a factor of 1.7 by writing specialized assembly code that used SSE1 instructions for part of the calculations. That speedup was very significant and resulted in a competitive product.

Not all the optimizations that I have done involved writing assembly language. For one audio application, I unrolled a loop and that caused a significant speedup. That does make the code harder to read, but I added comments. In this case, the speedup was critical to the application. For another hypothetical