Objective measures of code quality?
-
Sascha Lefèvre wrote:
your mental state
Like you need to worry about that[^] :)
Visit my blog at Sander's bits - Writing the code you need. Or read my articles at my CodeProject profile.
Simplicity is prerequisite for reliability. — Edsger W. Dijkstra
Regards, Sander
I know there's nothing left to speak of, but I fear your family could smell a chance and put the blame on me :laugh: :rose:
-
Duncan Edwards Jones wrote:
I'd personally have no problem at all running the standard .NET code metrics against all my code on this site.... could be an interesting exercise.
Exactly, but what I want is a more objective understanding of how the metric changes when I "improve" the code. It might even be worth taking a small piece of code and applying, say, some basic design patterns to it, as in the sketch below. What I would find a lot more useful in these metrics is if the code analyzer could actually say "here the design pattern foo is being applied, which is good" and "here, design pattern fizbin ought to be applied." Now that might be interesting! Marc
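To make that concrete, here's a toy before/after of the kind of change I mean (all names and rates invented): replacing a switch with a basic strategy-style lookup, which the standard .NET metrics would score quite differently.

    using System;
    using System.Collections.Generic;

    // Before: every new carrier adds a case, so this method's cyclomatic
    // complexity climbs with each one (it is 4 here).
    static decimal ShippingCostSwitch(string carrier, decimal weight)
    {
        switch (carrier)
        {
            case "ups":   return 4.00m + 0.50m * weight;
            case "fedex": return 5.00m + 0.40m * weight;
            case "dhl":   return 6.50m + 0.30m * weight;
            default:      throw new ArgumentException("Unknown carrier");
        }
    }

    // After: each lambda has complexity 1, and adding a carrier is a new
    // entry in the table instead of a change to existing logic.
    static readonly Dictionary<string, Func<decimal, decimal>> Carriers =
        new Dictionary<string, Func<decimal, decimal>>
        {
            ["ups"]   = w => 4.00m + 0.50m * w,
            ["fedex"] = w => 5.00m + 0.40m * w,
            ["dhl"]   = w => 6.50m + 0.30m * w,
        };

    static decimal ShippingCostStrategy(string carrier, decimal weight) =>
        Carriers.TryGetValue(carrier, out var cost)
            ? cost(weight)
            : throw new ArgumentException("Unknown carrier");

Whether an analyzer could ever recognize that as "strategy pattern applied here" is the open question.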
Imperative to Functional Programming Succinctly Contributors Wanted for Higher Order Programming Project!
Marc Clifton wrote:
"here, design pattern fizbin dustbin ought to be applied."
FTFY
-
If your big ball of mud is anywhere close to the size of my big ball of mud I would recommend rebuilding the app from scratch :-\ :rolleyes:
Let me just leave this opinion[^] from Joel Spolsky here, shall I? :-\
Wrong is evil and must be defeated. - Jeff Ello
-
Thank you for the link, Jörgen - an interesting read! And it probably applies to a lot of "those cases". If you're not concerned about your peace of mind, I'll show you the source of my old program and you will acknowledge that there are exceptions, or at least one :-\ /Sascha
-
Duncan Edwards Jones wrote:
I'd personally have no problem at all running the standard .NET code metrics against all my code on this site....
Maybe the hamsters would have a problem. Analyzing all the code that is currently available would require a lot of exercise. Some might even die in the attempt. I'm not sure whether that is morally justifiable.
Bastard Programmer from Hell :suss: If you can't read my code, try converting it here[^] (X-Clacks-Overhead: GNU Terry Pratchett)
-
Duncan Edwards Jones wrote:
but is there any reference for what the range is typical of / acceptable for real-world applications?
This is totally subjective. What one person thinks is acceptable could be very much unacceptable to someone else. One good measure of code quality is the level of defects. How much time do you spend on bug fixes versus new features?
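If you wanted to put a rough number on that, a sketch like this would do, assuming you can export work items from your tracker (the WorkItem shape here is made up):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical work-item shape; substitute whatever your tracker exports.
    record WorkItem(string Type, double Hours);

    static double FixToFeatureRatio(IEnumerable<WorkItem> items)
    {
        double fixes    = items.Where(i => i.Type == "Bug").Sum(i => i.Hours);
        double features = items.Where(i => i.Type == "Feature").Sum(i => i.Hours);

        // A ratio above 1.0 means more time patching than building,
        // which is one (very rough) smell of low code quality.
        return features == 0 ? double.PositiveInfinity : fixes / features;
    }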
If it's not broken, fix it until it is
Kevin Marois wrote:
One good measure of code quality is the level of defects.
I don't quite agree: the level of defects measures, first and foremost, the quality of your QA. The quality of your code is only a secondary factor. You won't spend much time on fixes if you're not aware of the bugs in the first place.
Kevin Marois wrote:
How much time do you spend on bug fixes versus new features?
Comparing the time spent on fixes to the time spent on new features is like comparing plums to peaches: they may be similar at the core, but it's a different flavor! The skill to find and fix errors is quite different from the one required for designing a new program or function. I'll grant you, though, that there is a correlation that grows with the experience of the programmer.
GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto)
-
Stefan_Lang wrote:
I don't quite agree, the level of defects measures, first and foremost, the quality of your QA. The quality of your code s only a secondary factor. You won't spend much time on fixes if you're not aware of the bugs in the first place.
I find developers (me included) are quick to blame QA for mistakes and code quality. We need to take a reasonable share of the blame for code that has defects. I have a good idea of what the code I'm working on is supposed to do, and I should do my best to account for potential issues that come up. That is what we're paid for. I find QA is best for double-checking work and the usability/flow of the software written. Hogan
-
You're misunderstanding me - I in no way blame QA when there are many bugs. Every program has bugs, even high-quality code. So if there are no bugs (that you know of), it means QA needs to test more! In other words: lots of reported bugs is a sign QA is working well, and this applies to both good and bad code. Thus my argument is that the lack of bugs doesn't necessarily imply good code, just that you haven't found those bugs yet.
GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto)
-
I would actually be interested in an experiment (maybe a hackathon) where everyone was asked to write the same code; people would then judge the submissions by quality, and then we'd apply these metric tools to see if the two correlate. Or I could take some C# code that some other idiot wrote and benchmark it against my cleaned-up version to see what the difference is. That would also be interesting: seeing how the numbers vary. Might even apply it to some code I have where I'm the idiot. :) Marc
Imperative to Functional Programming Succinctly Contributors Wanted for Higher Order Programming Project!
-
I know there are a number of metrics of code quality (such as cyclomatic complexity[^] etc.), but is there any reference for what range is typical of / acceptable for real-world applications? Alternatively, any references on whether tools like dotTest[^] make a difference? Yes - this question is related to my need to dig myself out of a big ball of mud[^]... EDIT 1: I found a reference to NIST Special Publication 500-235[^], which seems like a good place to start.
I don't care about objective measures of code that make me want to SUBJECTIVELY PUKE when I work on it. Red/Green/Refactor your way to happiness... Make this commitment: every file you touch to fix a bug, you will clean up. You will make it at least 10% better. And, using the tests along the way, try to fix as many bugs in one file as you can. Eventually you will make your way through all of the files. One must eat an elephant in bite-sized pieces :-) The sheer pride of knowing you are chipping away at the technical debt should carry you forward. I promise you, in a year's time, you will look back and laugh!
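A minimal example of one turn of that crank, assuming xUnit (PriceCalculator and the capped-discount bug are stand-ins for whatever file you happen to be in):

    using Xunit;

    // RED: pin the bug you came to fix with a failing test first.
    public class PriceCalculatorTests
    {
        [Fact]
        public void Discount_is_capped_at_100_percent()
        {
            var calc = new PriceCalculator();
            Assert.Equal(0m, calc.Apply(price: 10m, discount: 1.5m));
        }
    }

    // GREEN: the smallest change that makes the test pass.
    public class PriceCalculator
    {
        public decimal Apply(decimal price, decimal discount) =>
            price * (1 - System.Math.Min(discount, 1m));
    }

    // REFACTOR: with the test green, rename, extract, and delete dead
    // code in this file. That's your 10% for today.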
-
Sadly, if the coupling level is too high to allow for the creation of unit tests then "red/green/refactor" is as scary as having open heart surgery on a roller coaster.
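That said, sometimes you can cut one wire at a time. A minimal sketch of the usual "seam" trick (all names invented): wrap one hard dependency behind an interface, default to the old behaviour, and the class becomes testable without changing a single caller.

    using System;

    // The seam: an interface over the dependency you can't control in a test.
    public interface IClock { DateTime Now { get; } }

    public sealed class SystemClock : IClock
    {
        public DateTime Now => DateTime.Now;
    }

    public class InvoiceService
    {
        private readonly IClock _clock;

        // The null default preserves the old behaviour, so existing callers
        // compile unchanged; a test passes in a fake clock instead.
        public InvoiceService(IClock clock = null) =>
            _clock = clock ?? new SystemClock();

        public bool IsOverdue(DateTime dueDate) => _clock.Now > dueDate;
    }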
-
Kevin Marois wrote:
One good measure of code quality is the level of defects. How much time do you spend on bug fixes versus new features?
I disagree; neither of these is a good measure. A mudball codebase will usually cost an inordinate amount of time to add new features, even small, simple ones. Fixing bugs may actually be easier (but often causes more bugs elsewhere in the code). And since there are always more bugs than can get fixed, the time spent fixing bugs versus adding new features is often dictated by management.
We can program with only 1's, but if all you've got are zeros, you've got nothing.
-
It would help if there was a definition of code quality, but there isn't even that. Code might be defect-free (in the sense of working as designed) but still not fit for use. Code could be fit for use, but so complex as to be unmaintainable. There are numerous dimensions to that slippery concept called "quality".

Cyclomatic complexity doesn't measure code quality, it measures code complexity. The hypothesis is that there is a positive correlation between complexity and defect density. Most people think of measures of defects when they think about quality, but defects are not the only, or even necessarily the best, measure of quality. Defects have the advantage that we can detect them, count them, and graph them on charts. Dimensions like maintainability are pretty squishy. And dimensions like fitness for use are measurable in principle, but so expensive to measure that most teams don't bother (and more's the pity).

With cyclomatic complexity, what you do is look for the parts of your code with high complexity and test the snot out of those parts, because bugs lurk there. You can also use cyclomatic complexity as a measure of where to focus refactoring and abstracting efforts. A team might go so far as to require special review of any code checked in whose cyclomatic complexity exceeded a particular value. But there's no number that is "too big". Maybe the problem is just that hard.
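For a rough feel for the numbers the original question asked about: cyclomatic complexity can be counted by hand as 1 plus one per decision point (if, while, for, case, catch), and NIST SP 500-235 discusses McCabe's often-quoted limit of 10, to be exceeded only with justification. The sketch below (all types invented) shows both the counting and why the metric measures complexity rather than quality.

    static string Classify(int score)          // CC = 4: base 1 + three ifs
    {
        if (score < 0)  return "invalid";      // +1
        if (score < 60) return "fail";         // +1
        if (score < 90) return "pass";         // +1
        return "distinction";
    }

    // Same behaviour written two ways. Depending on the tool these score
    // the same or within a point of each other (some tools count && and ||
    // as decisions, some don't), yet they read very differently. The metric
    // can't tell you which is better; that part stays subjective.
    static bool CanShipNested(Order o)
    {
        if (o != null)
            if (o.IsPaid)
                if (o.Items.Count > 0)
                    return true;
        return false;
    }

    static bool CanShipFlat(Order o) =>
        o != null && o.IsPaid && o.Items.Count > 0;

    // A made-up type, just for illustration.
    public class Order
    {
        public bool IsPaid { get; set; }
        public System.Collections.Generic.List<string> Items { get; } = new();
    }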