Do you find cyclomatic complexity a useful measure?

Solution 1

We refactor mercilessly, and we use cyclomatic complexity as one of the metrics that puts code on our 'hit list'. Methods scoring 1-6 we don't flag for complexity (although they could get questioned for other reasons), 7-9 is questionable, and any method over 10 is assumed to be bad unless proven otherwise.
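As an illustration of how those thresholds map onto real code (this example is mine, not the answerer's), here is a small Python method annotated with the decision points most tools count. Each branch, loop, and boolean operator adds one to a baseline of 1, so this method scores about 7 and would land in the "questionable" band:

```python
def shipping_cost(order):
    # Baseline complexity: 1
    if order is None:                      # +1
        raise ValueError("no order")
    cost = 0
    for item in order.items:               # +1
        if item.fragile and item.heavy:    # +1 for the if, +1 for the 'and'
            cost += 15
        elif item.fragile:                 # +1
            cost += 10
    if order.express:                      # +1
        cost *= 2
    return cost                            # total: 7
```

Exact counting rules vary slightly between tools (some ignore boolean operators), so the same method may score 5 in one tool and 7 in another.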

The worst we've seen was 87 from a monstrous if-else-if chain in some legacy code we had to take over.

Solution 2

Actually, cyclomatic complexity can be put to use beyond just method-level thresholds. For starters, one big method with high complexity may be broken into several small methods with lower complexity. But has that really improved the codebase? Granted, you may get somewhat better readability from all those method names. But the total conditional logic hasn't changed. And the total conditional logic can often be reduced by replacing conditionals with polymorphism, as in the sketch below.
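A minimal sketch of the kind of refactoring meant here (the shape classes are invented for illustration, not taken from the answer): a function that dispatches on a type code is replaced by polymorphic calls, so the branching disappears from the caller entirely:

```python
# Before: the conditional logic lives in one function, and its
# cyclomatic complexity grows by one for every new kind of shape.
def area_before(shape):
    if shape.kind == "circle":
        return 3.14159 * shape.radius ** 2
    elif shape.kind == "rect":
        return shape.width * shape.height
    raise ValueError(f"unknown shape: {shape.kind}")

# After: each class owns its own formula.
class Circle:
    def __init__(self, radius):
        self.radius = radius

    def area(self):
        return 3.14159 * self.radius ** 2

class Rect:
    def __init__(self, width, height):
        self.width, self.height = width, height

    def area(self):
        return self.width * self.height

def area_after(shape):
    return shape.area()  # cyclomatic complexity: 1
```

The conditional hasn't vanished entirely (something still decides which class to construct), but it now lives in one place instead of being repeated in every method that switches on the type code.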

We need a metric that doesn't turn green by mere method decomposition. I call this CC100.

CC100 = 100 * (Total cyclomatic complexity of codebase) / (Total lines of code)
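As a rough sketch of how CC100 might be computed in practice (this is my illustration, assuming the Python radon package; the answer itself doesn't prescribe any tool):

```python
import pathlib

from radon.complexity import cc_visit  # per-function cyclomatic complexity
from radon.raw import analyze          # raw metrics, including lines of code
from radon.visitors import Function    # functions and methods, not class aggregates

def cc100(root="src"):
    """Return 100 * (total cyclomatic complexity) / (total source lines of code)."""
    total_cc = 0
    total_loc = 0
    for path in pathlib.Path(root).rglob("*.py"):
        try:
            source = path.read_text(encoding="utf-8")
            total_cc += sum(block.complexity for block in cc_visit(source)
                            if isinstance(block, Function))
            total_loc += analyze(source).sloc  # source lines of code
        except (SyntaxError, UnicodeDecodeError):
            continue  # skip files radon cannot parse
    return 100 * total_cc / total_loc if total_loc else 0.0

if __name__ == "__main__":
    print(f"CC100 = {cc100():.1f}")
```

Whether you divide by raw LOC or by source lines (SLOC, as here) changes the absolute number, so pick one definition and track the trend rather than the value.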

Solution 3

It's useful to me in the same way that big-O is useful: I know what it is, and can use it to get a gut feeling for whether a method is good or bad, but I don't need to compute it for every function I've written.

I think simpler metrics, like LOC, are at least as good in most cases. If a function doesn't fit on one screen, it almost doesn't matter how simple it is. If a function takes 20 parameters and makes 40 local variables, it doesn't matter if its cyclomatic complexity is 1.

Solution 4

Until there is a tool that can work well with C++ templates and metaprogramming techniques, it's not much help in my situation. Anyway, just remember that

"not all things that count can be measured, and not all things that can be measured count" Einstein

So remember to pass any information of this type through human filtering too.

Solution 5

We recently started to use it. We use NDepend to do some static code analysis, and it measures cyclomatic complexity. I agree, it's a decent way to identify methods for refactoring.

Sadly, we have seen numbers above 200 for some methods written by our offshore developers.


Comments

  • Bushuev
    Bushuev about 2 years

    I've been playing around with measuring the cyclomatic complexity of a big code base.

Cyclomatic complexity is the number of linearly independent paths through a program's source code, and there are lots of free tools for your language of choice.

    The results are interesting but not surprising. That is, the parts I know to be the hairiest were in fact the most complex (with a rating of > 50). But what I am finding useful is that a concrete "badness" number is assigned to each method as something I can point to when deciding where to start refactoring.

    Do you use cyclomatic complexity? What's the most complex bit of code you found?

  • wowest
    wowest about 15 years
    I've seen some of those huge methods you're talking about, and it's usually about putting out fires. Once a fire is out, there's no reason to refactor (it works damnit!) and now that chunk of code is that much bigger, and has another fire in a few weeks/months.
  • chaos
    chaos about 15 years
    Heh. Somebody knows the story.
  • Michael Stum
    Michael Stum about 15 years
    87? That's a very thorough implementation of the Arrow Anti-Pattern... Sincere condolences.
  • Prasanth Kumar
    Prasanth Kumar about 15 years
    In an earlier life, I remember having seen more than 300.
  • reinierpost
    reinierpost almost 14 years
    A colleague of mine has encountered cases of over a 1000.
  • reinierpost
    reinierpost almost 14 years
    But testability has improved: separate methods can (in principle) be tested separately, even if the logic doesn't change. Of course this doesn't hold if the methods also depend on a lot of global state, but that's a problem in its own right.
  • stakx - no longer contributing
    stakx - no longer contributing almost 14 years
    +1 for the hyperlink to an interesting slide show. I recently spent some thoughts on exactly this issue and am happy to find more material on it.
  • Allen Rice
    Allen Rice about 13 years
    ITS OVER 9000!!!!!! .... Sorry, couldn't help myself. Anything over 200 would be mind boggling
  • LAFK says Reinstate Monica
    LAFK says Reinstate Monica over 11 years
I'd say all those parameters and local variables are there for logic flow, so they contribute to CC. Just thinking off the top of my head.
  • LAFK says Reinstate Monica
    LAFK says Reinstate Monica over 11 years
    There's also a very interesting thing: often changed code with high complexity is the bug breeding ground. So, counting complexity automatically can be a good thing.
  • Calmarius
    Calmarius about 10 years
    So basically a highly sequential function containing 10 if statements in a row would fail the test?
  • Terrance
    Terrance over 9 years
I just dug into CC tonight as I was trying to provide a valid plan of attack for code cleanup of a project. Worst offenders were 450 for a single method and 1,289 for a class (and no, I didn't write any of it). Good game all. SIGH...
  • Wolf
    Wolf over 9 years
Replacing conditionals with polymorphism may reduce cyclomatic complexity, but it also decreases local comprehensibility.
  • ottodidakt
    ottodidakt over 9 years
    @Wolf OO-code is meant to be comprehended more by its interface (encapsulation) than by implementation - at least at the point of usage (method calls).
  • Wolf
    Wolf over 9 years
    @ottodidakt yes, seems I didn't really get your point - now it seems that you criticise the usage of classical CC metrics, stating that CC100 would help to detect over-complicated code?
  • Wolf
    Wolf over 9 years
    @reinierpost testability depends also on the quality of decomposition
  • crh225
    crh225 about 9 years
Just joined a company, and found one Windows Form with a cyclomatic complexity of 1518.