Objective measures of code quality?
-
I know there are a number of code quality metrics (such as cyclomatic complexity[^] etc.) but is there any reference for what range is typical of / acceptable for real-world applications? Alternatively, are there any references on whether tools like dotTest[^] make a difference? Yes - this question is related to my need to dig myself out of a big ball of mud[^]... EDIT 1: I found a reference to NIST Special Publication 500-235[^], which seems like a good place to start.
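For anyone unsure what cyclomatic complexity actually counts: it is roughly 1 plus the number of decision points in a piece of code (NIST SP 500-235 is the McCabe structured-testing report, where a limit of 10 per function is the commonly cited threshold). Below is a toy sketch in Python, purely illustrative and deliberately simplified (it only counts `if`, loops, exception handlers and boolean operators; real tools handle more node types):

```python
import ast

# Decision points that add a path through the code (simplified McCabe model).
# This is an approximation for illustration, not a full implementation.
_DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                   ast.With, ast.Assert, ast.comprehension)

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe cyclomatic complexity: 1 + number of decision points."""
    count = 1
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, _DECISION_NODES):
            count += 1
        elif isinstance(node, ast.BoolOp):
            # 'a and b and c' adds len(values) - 1 extra branches
            count += len(node.values) - 1
    return count

snippet = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    for _ in range(3):
        if x > 10 and x < 100:
            x -= 1
    return "positive"
"""
print(cyclomatic_complexity(snippet))  # -> 6
```

Even on this toy scale you can see why deeply nested conditionals push the number up fast.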
-
I know there are a number of code quality metrics (such as cyclomatic complexity[^] etc.) but is there any reference for what range is typical of / acceptable for real-world applications? Alternatively, are there any references on whether tools like dotTest[^] make a difference? Yes - this question is related to my need to dig myself out of a big ball of mud[^]... EDIT 1: I found a reference to NIST Special Publication 500-235[^], which seems like a good place to start.
Duncan Edwards Jones wrote:
but is there any reference for what the range is typical of / acceptable for real-world applications?
This is totally subjective. What one person thinks is acceptable could be very much unacceptable to someone else. One good measure of code quality is the level of defects. How much time do you spend on bug fixes versus new features?
If it's not broken, fix it until it is
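The bug-fix-versus-new-features measure suggested above is easy to compute from any task log. A minimal sketch (the log format here is hypothetical; adapt it to whatever your issue tracker exports):

```python
# Toy version of the "defect level" measure: what share of total effort
# goes to bug fixes versus new features? Closer to 0 is healthier.
def maintenance_ratio(tasks):
    """tasks: list of (kind, hours) pairs, kind being 'bug' or 'feature'."""
    bug_hours = sum(h for kind, h in tasks if kind == "bug")
    total_hours = sum(h for _, h in tasks)
    return bug_hours / total_hours if total_hours else 0.0

log = [("feature", 12.0), ("bug", 3.0), ("bug", 5.0), ("feature", 20.0)]
print(f"{maintenance_ratio(log):.0%}")  # 8 of 40 hours on bug fixes -> 20%
```

Tracked over time, the trend of this ratio says more than any single snapshot.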
-
I know there are a number of code quality metrics (such as cyclomatic complexity[^] etc.) but is there any reference for what range is typical of / acceptable for real-world applications? Alternatively, are there any references on whether tools like dotTest[^] make a difference? Yes - this question is related to my need to dig myself out of a big ball of mud[^]... EDIT 1: I found a reference to NIST Special Publication 500-235[^], which seems like a good place to start.
WTFs/minute[^] of course! :D
Visit my blog at Sander's bits - Writing the code you need. Or read my articles at my CodeProject profile.
Simplicity is prerequisite for reliability. — Edsger W. Dijkstra
Regards, Sander
-
I know there are a number of code quality metrics (such as cyclomatic complexity[^] etc.) but is there any reference for what range is typical of / acceptable for real-world applications? Alternatively, are there any references on whether tools like dotTest[^] make a difference? Yes - this question is related to my need to dig myself out of a big ball of mud[^]... EDIT 1: I found a reference to NIST Special Publication 500-235[^], which seems like a good place to start.
If the code performs its intended function and is efficient, easy to understand, and easy to maintain, it's of high quality.
#SupportHeForShe
If your actions inspire others to dream more, learn more, do more and become more, you are a leader. - John Q. Adams
You must accept 1 of 2 basic premises: Either we are alone in the universe or we are not alone. Either way, the implications are staggering! - Wernher von Braun
Only 2 things are infinite, the universe and human stupidity, and I'm not sure about the former. - Albert Einstein
-
Duncan Edwards Jones wrote:
but is there any reference for what the range is typical of / acceptable for real-world applications?
This is totally subjective. What one person thinks is acceptable could be very much unacceptable to someone else. One good measure of code quality is the level of defects. How much time do you spend on bug fixes versus new features?
If it's not broken, fix it until it is
Kevin Marois wrote:
One good measure of code quality is the level of defects. How much time do you spend on bug fixes versus new features?
:thumbsup::thumbsup:
#SupportHeForShe
If your actions inspire others to dream more, learn more, do more and become more, you are a leader. - John Q. Adams
You must accept 1 of 2 basic premises: Either we are alone in the universe or we are not alone. Either way, the implications are staggering! - Wernher von Braun
Only 2 things are infinite, the universe and human stupidity, and I'm not sure about the former. - Albert Einstein
-
WTFs/minute[^] of course! :D
Visit my blog at Sander's bits - Writing the code you need. Or read my articles at my CodeProject profile.
Simplicity is prerequisite for reliability. — Edsger W. Dijkstra
Regards, Sander
:thumbsup: :laugh: But when calculating WTF/minute you must not forget to account for the possibility of extended recreational breaks after each WTF (points at himself)
-
I know there are a number of code quality metrics (such as cyclomatic complexity[^] etc.) but is there any reference for what range is typical of / acceptable for real-world applications? Alternatively, are there any references on whether tools like dotTest[^] make a difference? Yes - this question is related to my need to dig myself out of a big ball of mud[^]... EDIT 1: I found a reference to NIST Special Publication 500-235[^], which seems like a good place to start.
Duncan Edwards Jones wrote:
but is there any reference for what the range is typical of / acceptable for real-world applications?
Most companies that I worked for would oppose sharing such information, assuming that someone had it. I find it useless for characterizing entire applications; but if you take a look at sections of your code, you might find places where it goes up sharply where you might not expect it. Across an entire application the number of possible paths can go up hugely without much real impact; think of adding another addin that saves the current document in "just another format". You might indeed want to include the bug count, LOC, average lines per method, number of types, number of namespaces, number of FxCop violations, and number of compiler warnings, and profile things such as speed and memory usage. That also makes those numbers rather project- and team-related. Speaking of the subject, it would be nice to have some of those calculated for the articles here :)
Bastard Programmer from Hell :suss: If you can't read my code, try converting it here[^] (X-Clacks-Overhead: GNU Terry Pratchett)
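Several of the size metrics listed above (LOC, average lines per method, counts of types) are trivial to gather yourself. A rough sketch for a single source file, using Python's stdlib `ast` as a stand-in language (the exact counting rules - e.g. whether blank lines count - are my own assumptions):

```python
import ast

def size_metrics(source: str) -> dict:
    """Per-file size metrics: non-blank LOC, number of functions,
    and average lines per function (docstrings included)."""
    tree = ast.parse(source)
    funcs = [n for n in ast.walk(tree)
             if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))]
    loc = len([line for line in source.splitlines() if line.strip()])
    func_lines = [n.end_lineno - n.lineno + 1 for n in funcs]
    return {
        "loc": loc,
        "functions": len(funcs),
        "avg_lines_per_function": sum(func_lines) / len(funcs) if funcs else 0,
    }

sample = "def a():\n    return 1\n\ndef b():\n    x = 1\n    return x\n"
print(size_metrics(sample))  # -> {'loc': 5, 'functions': 2, 'avg_lines_per_function': 2.5}
```

As the post says, none of these numbers mean much alone; they become useful when tracked per module alongside defect counts.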
-
I know there are a number of code quality metrics (such as cyclomatic complexity[^] etc.) but is there any reference for what range is typical of / acceptable for real-world applications? Alternatively, are there any references on whether tools like dotTest[^] make a difference? Yes - this question is related to my need to dig myself out of a big ball of mud[^]... EDIT 1: I found a reference to NIST Special Publication 500-235[^], which seems like a good place to start.
If your big ball of mud is anywhere close to the size of my big ball of mud I would recommend rebuilding the app from scratch :-\ :rolleyes:
-
I know there are a number of code quality metrics (such as cyclomatic complexity[^] etc.) but is there any reference for what range is typical of / acceptable for real-world applications? Alternatively, are there any references on whether tools like dotTest[^] make a difference? Yes - this question is related to my need to dig myself out of a big ball of mud[^]... EDIT 1: I found a reference to NIST Special Publication 500-235[^], which seems like a good place to start.
As others have said, quality is subjective. The most objective way to measure is to use the existing tools that have been created:

Memory leak testing:
- Valgrind

Performance tuning:
- Cachegrind
- Callgrind
- The profiling tools in Visual Studio

There are a number of static analysis tools:
- Klocwork: you configure the tool with a coding standard, such as JSF++, MISRA, or your own custom rules, and it analyzes for potential issues.
- Lattix: evaluates the coupling of the different modules and reports how modularly your code is organized.

Lines of code is useful if you combine that information with other statistics that you maintain, such as the number of defects, the volatility of the code in particular modules, and the amount of time developers spend modifying code in those modules.

Tools can help you identify issues, and sometimes even point towards possible solutions. However, I have mostly witnessed people expecting to run the tools and, like a magic wand, everything is fixed. Tools cannot fix a social problem. Ultimately, the value you get from the tools is related to how much time you are willing to invest in learning them and using them effectively.
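The module-coupling idea behind tools like Lattix can be illustrated in a few lines (this is only the core idea, not Lattix's actual algorithm; the module names are hypothetical): build a graph of which of your own modules each module imports, then look for modules everything depends on.

```python
import ast

def import_graph(sources: dict) -> dict:
    """Map module name -> set of *internal* modules it imports.
    sources: {module_name: source_code}; third-party imports are ignored."""
    internal = set(sources)
    graph = {}
    for name, src in sources.items():
        deps = set()
        for node in ast.walk(ast.parse(src)):
            if isinstance(node, ast.Import):
                deps.update(alias.name for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                deps.add(node.module)
        graph[name] = deps & internal  # keep only internal coupling
    return graph

project = {
    "ui": "import core\nimport storage\n",
    "core": "import storage\n",
    "storage": "import json\n",
}
print(import_graph(project))
```

A module imported by everything else is a coupling hotspot; changing it ripples through the whole application.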
-
Duncan Edwards Jones wrote:
but is there any reference for what the range is typical of / acceptable for real-world applications?
This is totally subjective. What one person thinks is acceptable could be very much unacceptable to someone else. One good measure of code quality is the level of defects. How much time do you spend on bug fixes versus new features?
If it's not broken, fix it until it is
Don't you mean: How much time should you spend on bug fixes versus new features? :sigh:
Wrong is evil and must be defeated. - Jeff Ello
-
If your big ball of mud is anywhere close to the size of my big ball of mud I would recommend rebuilding the app from scratch :-\ :rolleyes:
Every developer anywhere always wants to build it again from scratch ;) Actually, I just did for a customer's project... I plead guilty :-O
Visit my blog at Sander's bits - Writing the code you need. Or read my articles at my CodeProject profile.
Simplicity is prerequisite for reliability. — Edsger W. Dijkstra
Regards, Sander
-
I know there are a number of code quality metrics (such as cyclomatic complexity[^] etc.) but is there any reference for what range is typical of / acceptable for real-world applications? Alternatively, are there any references on whether tools like dotTest[^] make a difference? Yes - this question is related to my need to dig myself out of a big ball of mud[^]... EDIT 1: I found a reference to NIST Special Publication 500-235[^], which seems like a good place to start.
I would actually be interested in an experiment (maybe a hackathon) where everyone was asked to write the same code; people would then look at the code and judge its quality, and then apply these metric tools to see if they correlate. Or, I could take some C# code that some other idiot wrote and benchmark it against my cleaned-up version to see what the difference is. That would also be interesting, to see how the numbers vary. Might even apply it to some code I have where I'm the idiot. :) Marc
Imperative to Functional Programming Succinctly Contributors Wanted for Higher Order Programming Project!
-
I know there are a number of code quality metrics (such as cyclomatic complexity[^] etc.) but is there any reference for what range is typical of / acceptable for real-world applications? Alternatively, are there any references on whether tools like dotTest[^] make a difference? Yes - this question is related to my need to dig myself out of a big ball of mud[^]... EDIT 1: I found a reference to NIST Special Publication 500-235[^], which seems like a good place to start.
I'm a great fan of KLOCs. That's a measure of the purity of code per thousand litres of coffee.
-
I would actually be interested in an experiment (maybe a hackathon) where everyone was asked to write the same code; people would then look at the code and judge its quality, and then apply these metric tools to see if they correlate. Or, I could take some C# code that some other idiot wrote and benchmark it against my cleaned-up version to see what the difference is. That would also be interesting, to see how the numbers vary. Might even apply it to some code I have where I'm the idiot. :) Marc
Imperative to Functional Programming Succinctly Contributors Wanted for Higher Order Programming Project!
I'd personally have no problem at all running the standard .NET code metrics against all my code on this site.... could be an interesting exercise.
-
I'd personally have no problem at all running the standard .NET code metrics against all my code on this site.... could be an interesting exercise.
Duncan Edwards Jones wrote:
I'd personally have no problem at all running the standard .NET code metrics against all my code on this site.... could be an interesting exercise.
Exactly, but what I want is a more objective understanding of how the metric changes when I "improve" the code. Maybe even taking a small piece of code and applying, say, some basic design patterns to it would be interesting. What I would find a lot more useful in these metrics is if the code analyzer could actually say "here the design pattern foo is being applied, which is good" and "here, design pattern fizbin ought to be applied." Now that might be interesting! Marc
Imperative to Functional Programming Succinctly Contributors Wanted for Higher Order Programming Project!
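The before/after measurement Marc describes can be demonstrated on a tiny scale. Here the same (made-up) function is written with nested conditionals and then as a lookup table, scored by a crude branch count; real metric tools would report the same direction of change with more nuance:

```python
import ast

def branch_count(src: str) -> int:
    """Crude path indicator: number of if/elif branches in the source."""
    return sum(isinstance(n, ast.If) for n in ast.walk(ast.parse(src)))

before = '''
def shipping(region):
    if region == "EU":
        return 5
    elif region == "US":
        return 7
    elif region == "APAC":
        return 9
    else:
        return 12
'''

after = '''
def shipping(region):
    rates = {"EU": 5, "US": 7, "APAC": 9}
    return rates.get(region, 12)
'''

# The table-driven version eliminates every explicit branch.
print(branch_count(before), branch_count(after))  # -> 3 0
```

Whether "fewer branches" always equals "better" is exactly the kind of question such an experiment would test.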
-
Every developer anywhere always wants to build it again from scratch ;) Actually, I just did for a customers project... I plead guilty :-O
Visit my blog at Sander's bits - Writing the code you need. Or read my articles at my CodeProject profile.
Simplicity is prerequisite for reliability. — Edsger W. Dijkstra
Regards, Sander
If you're saying that it could be just my subjective perception that my application is a super-sized ball of mud that can't be rescued but only replaced by a whole new solution - then you're lucky, because I won't show you the source; I don't want to be held liable for your mental state :-D
-
If you're saying that it could be just my subjective perception that my application is a super-sized ball of mud that can't be rescued but only replaced by a whole new solution - then you're lucky, because I won't show you the source; I don't want to be held liable for your mental state :-D
Sascha Lefévre wrote:
your mental state
Like you need to worry about that[^] :)
Visit my blog at Sander's bits - Writing the code you need. Or read my articles at my CodeProject profile.
Simplicity is prerequisite for reliability. — Edsger W. Dijkstra
Regards, Sander
-
I would actually be interested in an experiment (maybe a hackathon) where everyone was asked to write the same code; people would then look at the code and judge its quality, and then apply these metric tools to see if they correlate. Or, I could take some C# code that some other idiot wrote and benchmark it against my cleaned-up version to see what the difference is. That would also be interesting, to see how the numbers vary. Might even apply it to some code I have where I'm the idiot. :) Marc
Imperative to Functional Programming Succinctly Contributors Wanted for Higher Order Programming Project!
Marc Clifton wrote:
I would actually be interested in an experiment (maybe a hackathon)
Up to this point I hoped you would propose a contest for the ugliest piece of code - I would already have won it... :-D :-O /Sascha
-
Sascha Lefévre wrote:
your mental state
Like you need to worry about that[^] :)
Visit my blog at Sander's bits - Writing the code you need. Or read my articles at my CodeProject profile.
Simplicity is prerequisite for reliability. — Edsger W. Dijkstra
Regards, Sander
I know there's nothing left to speak of, but I fear your family could smell a chance and put the blame on me :laugh: :rose:
-
Duncan Edwards Jones wrote:
I'd personally have no problem at all running the standard .NET code metrics against all my code on this site.... could be an interesting exercise.
Exactly, but what I want is a more objective understanding of how the metric changes when I "improve" the code. Maybe even taking a small piece of code and applying, say, some basic design patterns to it would be interesting. What I would find a lot more useful in these metrics is if the code analyzer could actually say "here the design pattern foo is being applied, which is good" and "here, design pattern fizbin ought to be applied." Now that might be interesting! Marc
Imperative to Functional Programming Succinctly Contributors Wanted for Higher Order Programming Project!
Marc Clifton wrote:
"here, design pattern fizbin dustbin ought to be applied."
FTFY