Code Analysis and Code Metrics
-
I have become a big fan of code analysis and code metrics (in VS2008). Currently I am redesigning and developing a product from scratch, and I run code analysis every day to make sure I am meeting the guidelines. I used to run FxCop every now and then on previous projects, but not frequently. I think running the analysis frequently is a great thing. The same goes for code metrics: trying to figure out ways to improve your maintainability index is fun (though I must say VS 2008 sometimes differs with me on what it considers maintainable code). I hope the VS 2008 RC and beyond improve the code metrics feature. I do think it will be a great thing for helping developers write quality code. Has anyone else tried code metrics? Before people jump in, I want to state the obvious: these are by no means a substitute for manual code reviews.
Co-Author ASP.NET AJAX in Action
I haven't tried the code metrics feature, but I often run code analysis in VS2005. Has anyone had problems with VS2008 and VS2005 on the same machine? I'm currently running VS2008 in a virtual machine, but it would be more convenient to run it on the machine itself.
only two letters away from being an asset
-
Running VS2005 and VS2008 on the same machine - no problems, other than my considerable frustration with the VS2005 IDE now that I've got used to the VS2008 niceties.
'--8<------------------------ Ex Datis: Duncan Jones Merrion Computing Ltd
-
-
Rama Krishna Vavilala wrote:
Anyone else tried code metrics?
I am dubious (would you expect anything else?) of algorithms that somehow glean from the code how maintainable it is, other than fairly simple things like "you should use an interface", "more comments", and "use properties instead of fields". OTOH, it would be a gas running FxCop and code metrics on Interacx! Marc
-
I have used code metrics in the past and they can provide a lot of good information to help you meet best practices. I think the metrics that are available in VS2008 are good, but they won't be available in anything but the Team Edition versions and aren't (currently) extensible to add your own metrics.
Scott.
—In just two days, tomorrow will be yesterday. [Forum Guidelines] [Articles] [Blog]
-
It does work, provided you don't go overboard and have someone who really understands what the metrics mean. In some ways, it's not much different from function point counting to do project estimates.
Marc Clifton wrote:
fairly simple things like "you should use an interface", "more comments", and "use properties instead of fields".
Some of the metrics are that simple, and measure things like the percentage of comments to code. Others show things like how tightly coupled your objects are (how many inter-dependencies things have), the ratio of abstract to concrete classes, the number of interfaces to classes, and the average size of classes and methods (generally termed complexity) - all of which have a direct impact on maintainability.
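As a rough illustration of how simple the simplest of these are, here is a line-based sketch of a comment-to-code percentage. This is not how VS2008 computes it (real tools work from the parsed syntax tree, not text); it just shows the idea:

```python
def comment_ratio(source: str) -> float:
    """Return pure comment lines as a fraction of all non-blank lines.

    Naive, line-based, C#-style '//' only - a sketch, not a real analyzer.
    """
    lines = [ln.strip() for ln in source.splitlines() if ln.strip()]
    if not lines:
        return 0.0
    comments = sum(1 for ln in lines if ln.startswith("//"))
    return comments / len(lines)

sample = """
// Adds two numbers.
int Add(int a, int b)
{
    return a + b;  // trivial
}
"""
print(comment_ratio(sample))  # 1 of 5 non-blank lines is a pure comment: 0.2
```

Note that the trailing `// trivial` doesn't count here, which is exactly the kind of judgment call that makes different tools report different numbers for the "same" metric.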
Scott.
—In just two days, tomorrow will be yesterday. [Forum Guidelines] [Articles] [Blog]
-
Mark Nischalke wrote:
Has anyone had a problems with VS2008 and VS2005 on the same machine? I'm currently running VS2008 on a virtual machine, but it would be more convenient to run on the normal machine.
I have had no problems running them on the same system. I have, however, started using VS2008 almost exclusively.
Scott.
—In just two days, tomorrow will be yesterday. [Forum Guidelines] [Articles] [Blog]
-
Rama Krishna Vavilala wrote:
Before people jump in I want to state the obvious: these are by no menas a substitute to manual code reviews?
Absolutely agree. With code analysis tools we may only achieve 20%-30%, and they give a lot of false positives. We are using a different product in our environment, and it is geared more towards security. Though running the analysis and finding issues is great, sometimes it is tough to enforce fixing some of them (because of different priorities for different groups, tight deadlines - all the usual things :)). In that case we may have to choose the lesser evil. Thanks, Madhu.
-
Scott Dorman wrote:
Others show things like ...
So the question I always have when I've run this kind of analysis is, how do I know what's good and what's bad? In other words, what do I do with all that information, so that I can then determine what needs to be *cough* refactored? Marc
-
That's always the issue. There are a ton of metrics available and, while they all mean something, some are more important than others. There is a pretty good explanation of the different metrics available through NDepend, though the definitions are fairly generic: http://www.ndepend.com/Metrics.aspx[^] The ones I have used in the past, and generally consider to be the "primary" metrics, are:
Relational Cohesion (H): average number of internal relationships per type.
Efferent Coupling (Ce): number of types within the class that depend on types outside the class.
Afferent Coupling (Ca): number of types outside the class that depend on types within the class.
Cyclomatic Complexity (CC): number of decisions that can be taken in a procedure.
Instability (I): ratio of efferent coupling to total coupling, which indicates the class's resilience to change.
Scott Hanselman also has a good post[^] explaining some of the metrics.
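Two of these are simple formulas worth pinning down. The sketch below uses the standard definitions; the keyword-counting CC estimate is a crude text approximation of what a real analyzer computes from the syntax tree or IL:

```python
def instability(ce: int, ca: int) -> float:
    """I = Ce / (Ce + Ca): 1.0 is maximally unstable, 0.0 maximally stable."""
    total = ce + ca
    return ce / total if total else 0.0

# Crude decision-point markers for C-family source; a real tool walks the AST.
DECISION_KEYWORDS = ("if ", "for ", "while ", "case ", "&&", "||", "catch ")

def cyclomatic_complexity(source: str) -> int:
    """Naive CC: 1 + the number of decision points found by keyword counting."""
    return 1 + sum(source.count(kw) for kw in DECISION_KEYWORDS)

# A class with 6 outgoing and only 2 incoming dependencies is fairly unstable:
print(instability(6, 2))  # 0.75
```

A widely quoted rule of thumb is that methods with CC above roughly 10 deserve a closer look, though where exactly to draw that line is a judgment call.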
Scott.
—In just two days, tomorrow will be yesterday. [Forum Guidelines] [Articles] [Blog]
-
Scott Dorman wrote:
There is a pretty good explanation of different metrics available through NDepend
Interesting. Thanks for the links!
Scott Dorman wrote:
Cyclomatic Complexity (CC): number of decisions that can be taken in a procedure
Ha! I recently wrote a decision tree editor and parser because I noticed that my code was getting incredibly complex in managing the decisions around the user's request to do something vs. the ability to do it, and the alternatives if not all the data was available (this is for a streaming AVI player similar to TiVo). The branches were getting so complex that I isolated:
1) the information required to make decisions
2) the decision graph itself
The code feeds all the information to the parser, and the decision graph is expressed in XML with callbacks into the code possible at each decision point. It made managing the decision tree incredibly easier (and alterable without recompiling the code), it made testing easier, and my spaghetti code was completely eliminated. Instead I ended up with very small methods for determining the information and very small methods for acting on the decisions. I still have to write an article about that one! Marc
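The shape of that idea can be sketched in a few lines: the graph lives in XML, and the code only supplies facts and leaf callbacks. The schema and callback names below are invented for illustration - they are not Marc's actual format:

```python
import xml.etree.ElementTree as ET

# Hypothetical decision graph: branch on facts, invoke a callback at each leaf.
GRAPH = """
<decision test="have_stream">
    <decision when="true" test="have_index">
        <action when="true" call="play"/>
        <action when="false" call="rebuild_index"/>
    </decision>
    <action when="false" call="show_error"/>
</decision>
"""

def evaluate(node, facts, callbacks):
    """Walk the tree: look up this node's fact, follow the matching branch."""
    result = str(facts[node.get("test")]).lower()
    for child in node:
        if child.get("when") == result:
            if child.tag == "action":
                return callbacks[child.get("call")]()
            return evaluate(child, facts, callbacks)
    raise KeyError(f"no branch for {node.get('test')}={result}")

callbacks = {"play": lambda: "playing",
             "rebuild_index": lambda: "rebuilding",
             "show_error": lambda: "error"}
root = ET.fromstring(GRAPH)
print(evaluate(root, {"have_stream": True, "have_index": False}, callbacks))
# -> rebuilding
```

The payoff Marc describes follows directly: each fact-gathering method and each callback stays tiny and individually testable, and rearranging the branches is an XML edit rather than a recompile.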
-
Marc Clifton wrote:
Thanks for the links!
You're welcome.
Marc Clifton wrote:
I recently wrote a decision tree editor and parser because I noticed that my code was getting incredibly complex
Yes, this sounds like it would have been a great example for testing some of the different metrics. If you still have the older version of the code, it might be a good way to understand what the different metrics are telling you...run them once on the old code and then again on the new code and compare the differences.
Marc Clifton wrote:
I still have to write an article about that one!
Sounds like it will be a good article.
Scott.
—In just two days, tomorrow will be yesterday. [Forum Guidelines] [Articles] [Blog]
-
I would like to read it!
Todd Smith
-
Ran into a problem on one box where the PATH environment variable hit a limit with both 2005 and 2008 installed (why is there still a limit in this day and age?).
Todd Smith
-
Todd Smith wrote:
Ran into a problem on one box where the PATH environment variable hit a limit with both 2005 and 2008 installed (wtf is there a limit in this day and age).
I don't think that's actually an issue with Visual Studio, but an issue with the underlying Win32 file APIs. The limit is there mostly because those APIs are still legacy code and the underlying file system structures haven't changed in forever. If Microsoft ever moved away from FAT and NTFS based file systems and implemented something like ZFS the world would be a much simpler place.
Scott.
—In just two days, tomorrow will be yesterday. [Forum Guidelines] [Articles] [Blog]