What is the fascination with code metrics?

Solution 1

The answers in this thread are kind of odd, as they speak of:

  • "the team", as if it were the one and only beneficiary of those metrics;
  • "the metrics", as if they meant anything in themselves.

1/ Metrics are not for one population, but for three:

  • developers: they are concerned with instantaneous static code metrics from static analysis of their code (cyclomatic complexity, comment quality, number of lines, ...)
  • project leaders: they are concerned with daily live code metrics coming from unit tests, code coverage, and continuous integration testing
  • business sponsors (they are always forgotten, but they are the stakeholders, the ones paying for the development): they are concerned with weekly global code metrics regarding architectural design, security, dependencies, ...

All those metrics can of course be watched and analyzed by all three populations, but each kind is designed to be best used by one specific group.

2/ Metrics, by themselves, represent a snapshot of the code, and that means... nothing!

It is the combination of those metrics, and the combination of those different levels of analysis, that may indicate "good" or "bad" code; but more importantly, it is the trend of those metrics that is significant.

It is the repetition of those metrics over time that gives the real added value, as it helps business managers/project leaders/developers prioritize amongst the different possible code fixes.


In other words, your question about the "fascination with metrics" could refer to the difference between:

  • "beautiful" code (although that is always in the eye of the beholder-coder)
  • "good" code (which works, and can prove it works)

So, for instance, a function with a cyclomatic complexity of 9 could be defined as "beautiful", as opposed to one long, convoluted function with a cyclomatic complexity of 42.

BUT, if:

  • the latter function has a steady complexity, combined with a code coverage of 95%,
  • whereas the former has an increasing complexity, combined with a coverage of... 0%,

one could argue:

  • the latter represents "good" code (it works, it is stable, and if it needs to change, one can check that it still works after modifications),
  • the former is "bad" code (it still needs cases and conditions added to cover everything it has to do, and there is no easy way to run regression tests)

So, to summarize:

a single metric that by itself always indicates [...]

Not much, except that the code may be more "beautiful", which in itself does not mean a lot...

Is there some magical insight to be gained from code metrics that I've overlooked?

Only the combination and trend of metrics give the real "magical insight" you are after.
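
As a toy illustration of that last point, a combined view might flag a function whose complexity is trending upward while its coverage stays at zero, regardless of its absolute numbers. This is only a sketch with made-up data and thresholds, not a prescription:

# Toy illustration (hypothetical data and thresholds): no single number below is
# meaningful on its own; the flag comes from combining a rising complexity trend
# with missing coverage.
snapshots = {
    # function name: (complexity per weekly snapshot, current coverage %)
    "parse_invoice": ([40, 41, 42], 95),   # complex but stable and well tested
    "render_report": ([7, 9, 12], 0),      # "beautiful" but growing and untested
}

for name, (complexity_trend, coverage) in snapshots.items():
    rising = complexity_trend[-1] > complexity_trend[0]
    if rising and coverage < 50:
        print(f"review candidate: {name} (trend {complexity_trend}, coverage {coverage}%)")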

Solution 2

Some months ago, a project that I did as a one-person job was measured for cyclomatic complexity. That was my first exposure to this kind of metric.

The first report I got was shocking. Almost all of my functions failed the test, even the (imho) very simple ones. For half of the routines I got around the complexity issue by moving logical sub-tasks into subroutines, even if they were called only once.

For the other half of the routines, my pride as a programmer kicked in and I tried to rewrite them so that they do the same thing, just simpler and more readable. That worked, and I was able to get most of them below the customer's cyclomatic complexity threshold.

In the end I was almost always able to come up with a better solution and much cleaner code. Performance did not suffer from this (trust me - I'm paranoid about this, and I check the disassembly of the compiler output quite often).
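
To illustrate the extraction idea (with hypothetical names, and in Python rather than whatever language the project actually used), pulling each logical sub-task into its own routine keeps every piece well under a typical complexity threshold, even when the helpers are called from only one place:

# Before: one routine mixes validation, per-item checks and a discount rule,
# so every branch adds to a single function's cyclomatic complexity.
def process_order(order):
    if order is None or not order.get("items"):
        raise ValueError("empty order")
    total = 0
    for item in order["items"]:
        if item["qty"] <= 0:
            raise ValueError("bad quantity")
        total += item["qty"] * item["price"]
    if total > 1000:
        total *= 0.95  # bulk discount
    return total

# After: each logical sub-task lives in its own small routine with low complexity,
# even though the helpers are only ever called from one place.
def validate(order):
    if order is None or not order.get("items"):
        raise ValueError("empty order")

def line_total(item):
    if item["qty"] <= 0:
        raise ValueError("bad quantity")
    return item["qty"] * item["price"]

def apply_discount(total):
    return total * 0.95 if total > 1000 else total

def process_order_refactored(order):
    validate(order)
    return apply_discount(sum(line_total(i) for i in order["items"]))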

I think metrics are a good thing if you use them as a reason/motivation to improve your code. It's important to know when to stop and ask for a metric violation grant, though.

Metrics are guides and helpers, not ends in themselves.

Solution 3

The best metric that I have ever used is the C.R.A.P. score.

Basically it's an algorithm that compares weighted cyclomatic complexity with automated test coverage. The algorithm looks like this:

CRAP(m) = comp(m)^2 * (1 - cov(m)/100)^3 + comp(m)

where comp(m) is the cyclomatic complexity of method m, and cov(m) is the test code coverage provided by automated tests.

The authors of the aforementioned article (please go read it... it's well worth your time) suggest a maximum C.R.A.P. score of 30, which breaks down in the following way:

Method’s Cyclomatic Complexity        % of coverage required to be
                                      below CRAPpy threshold
------------------------------        --------------------------------
0 – 5                                   0%
10                                     42%
15                                     57%
20                                     71%
25                                     80%
30                                    100%
31+                                   No amount of testing will keep methods
                                      this complex out of CRAP territory.
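
As a minimal sketch (in Python, with made-up inputs), the formula is straightforward to compute once your complexity and coverage tools report comp and cov for a method:

# A minimal sketch of the quoted formula; comp and cov are assumed to come from
# whatever complexity and coverage tools you already run.
def crap_score(comp, cov):
    # CRAP(m) = comp(m)^2 * (1 - cov(m)/100)^3 + comp(m)
    return comp ** 2 * (1 - cov / 100) ** 3 + comp

print(crap_score(5, 0))     # 30.0 -> a trivial method is tolerable even untested
print(crap_score(25, 80))   # ~30  -> CC 25 needs roughly 80% coverage to stay at the threshold
print(crap_score(31, 100))  # 31.0 -> above 30 no matter how much you test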

As you can quickly see, the metric rewards writing code that is not complex, coupled with good test coverage (if you are writing unit tests - and you should be - and are not measuring coverage... well, you would probably enjoy spitting into the wind as well). ;-)

For most of my development teams I tried really hard to get the C.R.A.P. score below 8, but if they had valid reasons to justify the added complexity, that was acceptable as long as they covered the complexity with sufficient tests. (Complex code is always very difficult to test... which is kind of a hidden benefit of this metric.)

Most people found it hard at first to write code that would pass the C.R.A.P. score, but over time they wrote better code, code that had fewer problems, and code that was a lot easier to debug. Of any metric I have used, this is the one with the fewest concerns and the greatest benefit.

Solution 4

For me, the single most important metric for identifying bad code is cyclomatic complexity. Almost all methods in my projects are below CC 10, and bugs are invariably found in legacy methods with a CC over 30. High CC usually indicates:

  • code written in haste (i.e. there was no time to find an elegant solution, not because the problem required a complex one)
  • untested code (no one writes tests for such beasts)
  • code that was patched and fixed numerous times (i.e. riddled with ifs and todo comments)
  • a prime target for refactoring
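
For a rough sense of what such a measurement looks like, here is an illustrative Python sketch that approximates cyclomatic complexity by counting decision points with the standard ast module; dedicated tools (radon, mccabe, many linters) are more thorough, so treat this only as an illustration:

# Rough approximation: start at 1 and add one for every branching construct.
import ast

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

def approx_cyclomatic_complexity(source):
    complexity = 1
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, BRANCH_NODES):
            complexity += 1
        elif isinstance(node, ast.BoolOp):
            # each extra operand of and/or adds one decision path
            complexity += len(node.values) - 1
    return complexity

snippet = '''
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    for _ in range(3):
        if x > 100 and x % 2 == 0:
            x -= 1
    return "positive"
'''
print(approx_cyclomatic_complexity(snippet))  # 6 - comfortably below CC 10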

Solution 5

A good code review is no substitute for a good static analysis tool, which is of course no substitute for a good set of unit tests, and unit tests are no good without a set of acceptance tests...

Code metrics are another tool to put into your toolbox. They are not a solution in their own right; they are just a tool to be used as appropriate (along with all the other tools in your box!).


Comments

  • Steven A. Lowe
    Steven A. Lowe about 2 years

    I've seen a number of 'code metrics' related questions on SO lately, and have to wonder what the fascination is. Here are some recent examples:

    In my mind, no metric can substitute for a code review, though:

    • some metrics sometimes may indicate places that need to be reviewed, and
    • radical changes in metrics over short time frames may indicate places that need to be reviewed

    But I cannot think of a single metric that by itself always indicates 'good' or 'bad' code - there are always exceptions and reasons for things that the measurements cannot see.

    Is there some magical insight to be gained from code metrics that I've overlooked? Are lazy programmers/managers looking for excuses not to read code? Are people presented with giant legacy code bases and looking for a place to start? What's going on?

    Note: I have asked some of these questions on the specific threads, both in answers and comments, and got no replies, so I thought I should ask the community in general, as perhaps I am missing something. It would be nice to run a metrics batch job and never actually have to read other people's code (or my own) again; I just don't think it is practical!

    EDIT: I am familiar with most if not all of the metrics being discussed, I just don't see the point of them in isolation or as arbitrary standards of quality.

  • Steven A. Lowe
    Steven A. Lowe over 15 years
    There is a difference: one has more lines of code. But so what? That says nothing about the quality of the software...
  • Steven A. Lowe
    Steven A. Lowe over 15 years
    I would agree, except that what you measure in order to improve is critical. If you want to reduce defects in a manufacturing process but all you measure is the number of defects and the amount of coffee consumed in the break room, you're probably not going to get anywhere.
  • Steven A. Lowe
    Steven A. Lowe over 15 years
    In other words, there are no established correlations to use as a standard; are you recommending the use of metrics and correlation to establish a standard? If so, can you demonstrate a causal linkage, or are we again measuring coffee consumption?
  • Steven A. Lowe
    Steven A. Lowe over 15 years
    Did you look at the project? What was the cause of the huge difference in the metric?
  • quamrana
    quamrana over 15 years
    I know which one I'd choose to work on next ;-)
  • Richard T
    Richard T over 15 years
    "Beautiful" code may well be easier to maintain - and if so, THAT has a lot of value!
  • John D. Cook
    John D. Cook over 15 years
    The project with 6K complexity started out poorly written, then got worse as it evolved under extreme pressure.
  • Alfred Myers
    Alfred Myers almost 15 years
    That would be true assuming the previous code was developed by yourself or another developer with equivalent or better skills. If, on the other hand, the code was developed by a developer with fewer skills, an increasing number of lines in the diff (changed lines + new lines + lots of deleted lines) may actually mean an improvement in the code, as you are getting rid of poor-quality code.
  • Mike Dunlavey
    Mike Dunlavey almost 15 years
    @Alfred: Sure. I'm talking ideal-world, and averaged over a number of requirement changes. Here's an example of what I'm talking about, and it does have a learning curve: stackoverflow.com/questions/371898/…
  • J S
    J S over 14 years
    How do you know you did a good job if you have no baseline to compare it to?
  • Peter Perháč
    Peter Perháč over 14 years
    Very nice - you have managed to express a very important idea in an elegant manner. I am currently doing research in the field of software metrics and I might reference this.
  • Shabbyrobe
    Shabbyrobe over 13 years
    +1 for "It's important to know when to stop and ask for a metric violation grant though." You're so right about this. It's destructive to adhere to metrics dogmatically.