What is a reasonable code coverage % for unit tests (and why)?


Solution 1

This prose by Alberto Savoia answers precisely that question (in a nicely entertaining manner at that!):

http://www.artima.com/forums/flat.jsp?forum=106&thread=204677

Testivus On Test Coverage

Early one morning, a programmer asked the great master:

“I am ready to write some unit tests. What code coverage should I aim for?”

The great master replied:

“Don’t worry about coverage, just write some good tests.”

The programmer smiled, bowed, and left.

...

Later that day, a second programmer asked the same question.

The great master pointed at a pot of boiling water and said:

“How many grains of rice should I put in that pot?”

The programmer, looking puzzled, replied:

“How can I possibly tell you? It depends on how many people you need to feed, how hungry they are, what other food you are serving, how much rice you have available, and so on.”

“Exactly,” said the great master.

The second programmer smiled, bowed, and left.

...

Toward the end of the day, a third programmer came and asked the same question about code coverage.

“Eighty percent and no less!” replied the master in a stern voice, pounding his fist on the table.

The third programmer smiled, bowed, and left.

...

After this last reply, a young apprentice approached the great master:

“Great master, today I overheard you answer the same question about code coverage with three different answers. Why?”

The great master stood up from his chair:

“Come get some fresh tea with me and let’s talk about it.”

After they filled their cups with smoking hot green tea, the great master began to answer:

“The first programmer is new and just getting started with testing. Right now he has a lot of code and no tests. He has a long way to go; focusing on code coverage at this time would be depressing and quite useless. He’s better off just getting used to writing and running some tests. He can worry about coverage later.”

“The second programmer, on the other hand, is quite experienced both at programming and testing. When I replied by asking her how many grains of rice I should put in a pot, I helped her realize that the amount of testing necessary depends on a number of factors, and she knows those factors better than I do – it’s her code after all. There is no single, simple answer, and she’s smart enough to handle the truth and work with that.”

“I see,” said the young apprentice, “but if there is no single simple answer, then why did you answer the third programmer ‘Eighty percent and no less’?”

The great master laughed so hard and loud that his belly, evidence that he drank more than just green tea, flopped up and down.

“The third programmer wants only simple answers – even when there are no simple answers … and then does not follow them anyway.”

The young apprentice and the grizzled great master finished drinking their tea in contemplative silence.

Solution 2

Code coverage is a misleading metric if 100% coverage is your goal (instead of 100% testing of all features).

  • You could get 100% by hitting all the lines once, yet still miss testing a particular sequence (logical path) in which those lines are hit; see the sketch after this list.
  • You could fall short of 100% but still have tested all of your frequently used (the important 80%) code paths. Having tests that exercise every 'throw ExceptionTypeX' or similar defensive-programming guard you've put in is a 'nice to have', not a 'must have'.
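
To make the first bullet concrete, here is a minimal, hypothetical Java sketch (the class and method names are invented for illustration). Two tests can execute every line, reporting 100% line coverage, while never exercising the path where both conditions hold:

```java
public class Discounts {

    // Applies two independent discounts to a price.
    public static double apply(double price, boolean member, boolean onSale) {
        if (member) {
            price *= 0.9;  // executed by a test with member = true, onSale = false
        }
        if (onSale) {
            price *= 0.8;  // executed by a test with member = false, onSale = true
        }
        return price;
    }
}
```

Those two tests touch every line, yet the member-and-on-sale sequence (where the discounts compound) is never checked, so a bug specific to that combination would slip through despite the 100% report.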

So trust yourself or your developers to be thorough and cover every path through their code. Be pragmatic and don't chase the magical 100% coverage. If you TDD your code, you should get 90%+ coverage as a bonus. Use code coverage to highlight chunks of code you have missed (which shouldn't happen if you TDD, though, since you write code only to make a test pass; no code can exist without its partner test).

Solution 3

Jon Limjap makes a good point - there is not a single number that is going to make sense as a standard for every project. There are projects that just don't need such a standard. Where the accepted answer falls short, in my opinion, is in describing how one might make that decision for a given project.

I will take a shot at doing so. I am not an expert in test engineering and would be happy to see a more informed answer.

When to set code coverage requirements

First, why would you want to impose such a standard in the first place? In general, when you want to introduce empirical confidence into your process. What do I mean by "empirical confidence"? Well, the real goal is correctness. For most software, we can't possibly know this across all inputs, so we settle for saying that code is well-tested. This is more knowable, but it is still a subjective standard: it will always be open to debate whether or not you have met it. Those debates are useful and should occur, but they also expose uncertainty.

Code coverage is an objective measurement: once you see your coverage report, there is no ambiguity about whether the standard has been met. Does it prove correctness? Not at all, but it has a clear relationship to how well-tested the code is, which in turn is our best way to increase confidence in its correctness. Code coverage is a measurable approximation of immeasurable qualities we care about.

Some specific cases where having an empirical standard could add value:

  • To satisfy stakeholders. For many projects, there are various actors who have an interest in software quality who may not be involved in the day-to-day development of the software (managers, technical leads, etc.) Saying "we're going to write all the tests we really need" is not convincing: They either need to trust entirely, or verify with ongoing close oversight (assuming they even have the technical understanding to do so.) Providing measurable standards and explaining how they reasonably approximate actual goals is better.
  • To normalize team behavior. Stakeholders aside, if you are working on a team where multiple people are writing code and tests, there is room for ambiguity about what qualifies as "well-tested." Do all of your colleagues have the same idea of what level of testing is good enough? Probably not. How do you reconcile this? Find a metric you can all agree on and accept it as a reasonable approximation. This is especially (but not exclusively) useful in large teams, where leads may not have direct oversight over junior developers, for instance. Networks of trust matter as well, but without objective measurements, it is easy for group behavior to become inconsistent, even if everyone is acting in good faith.
  • To keep yourself honest. Even if you're the only developer and only stakeholder for your project, you might have certain qualities in mind for the software. Instead of making ongoing subjective assessments about how well-tested the software is (which takes work), you can use code coverage as a reasonable approximation, and let machines measure it for you.

Which metrics to use

Code coverage is not a single metric; there are several different ways of measuring coverage. Which one you might set a standard upon depends on what you're using that standard to satisfy.

I'll use two common metrics as examples of when you might use them to set standards:

  • Statement coverage: What percentage of statements have been executed during testing? Useful to get a sense of the physical coverage of your code: How much of the code that I have written have I actually tested?
    • This kind of coverage supports a weaker correctness argument, but is also easier to achieve. If you're just using code coverage to ensure that things get tested (and not as an indicator of test quality beyond that) then statement coverage is probably sufficient.
  • Branch coverage: When there is branching logic (e.g. an if), have both branches been evaluated? This gives a better sense of the logical coverage of your code: How many of the possible paths my code may take have I tested?
    • This kind of coverage is a much better indicator that a program has been tested across a comprehensive set of inputs. If you're using code coverage as your best empirical approximation for confidence in correctness, you should set standards based on branch coverage or similar (see the sketch after this list).
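
As a hedged sketch of the difference (names invented for illustration): an `if` with no `else` can reach 100% statement coverage from a single test while leaving half of its branches unexercised.

```java
public class MathUtil {

    public static int clamp(int value, int max) {
        if (value > max) {  // two branches: condition true, condition false
            value = max;    // the only statement inside the if
        }
        return value;
    }
}
```

A single test calling clamp(15, 10) executes every statement above, so statement coverage reports 100%; but the false branch (value <= max, where the body is skipped) is never taken, so branch coverage sits at 50% until a test like clamp(5, 10) is added.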

There are many other metrics (line coverage is similar to statement coverage, but yields different numeric results for multi-line statements, for instance; conditional coverage and path coverage are similar to branch coverage, but reflect a more detailed view of the possible permutations of program execution you might encounter.)

What percentage to require

Finally, back to the original question: If you set code coverage standards, what should that number be?

Hopefully it's clear at this point that we're talking about an approximation to begin with, so any number we pick is going to be inherently approximate.

Some numbers that one might choose:

  • 100%. You might choose this because you want to be sure everything is tested. This doesn't give you any insight into test quality, but does tell you that some test of some quality has touched every statement (or branch, etc.) Again, this comes back to degree of confidence: If your coverage is below 100%, you know some subset of your code is untested.
    • Some might argue that this is silly, and you should only test the parts of your code that are really important. I would argue that you should also only maintain the parts of your code that are really important. Code coverage can be improved by removing untested code, too.
  • 99% (or 95%, or other numbers in the high nineties). Appropriate in cases where you want to convey a level of confidence similar to 100%, but leave yourself some margin to not worry about the occasional hard-to-test corner of code.
  • 80%. I've seen this number in use a few times, and don't entirely know where it originates. I think it might be a weird misappropriation of the 80-20 rule; generally, the intent here is to show that most of your code is tested. (Yes, 51% would also be "most", but 80% is more reflective of what most people mean by most.) This is appropriate for middle-ground cases where "well-tested" is not a high priority (you don't want to waste effort on low-value tests), but is enough of a priority that you'd still like to have some standard in place.

I haven't seen numbers below 80% in practice, and have a hard time imagining a case where one would set them. The role of these standards is to increase confidence in correctness, and numbers below 80% aren't particularly confidence-inspiring. (Yes, this is subjective, but again, the idea is to make the subjective choice once when you set the standard, and then use an objective measurement going forward.)
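
Whatever number you pick, most coverage tools can enforce it mechanically, which keeps the standard objective. As one hedged sketch (assuming a Java project built with Maven; the plugin version and exact schema may differ in your setup), the JaCoCo plugin's check goal can fail the build when branch coverage drops below a chosen ratio:

```xml
<!-- Sketch only: enforce a minimum branch-coverage ratio at build time. -->
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.8.11</version>
  <executions>
    <execution>
      <goals>
        <goal>prepare-agent</goal> <!-- instruments the test run -->
      </goals>
    </execution>
    <execution>
      <id>check-coverage</id>
      <goals>
        <goal>check</goal> <!-- fails the build if the rule below is violated -->
      </goals>
      <configuration>
        <rules>
          <rule>
            <element>BUNDLE</element>
            <limits>
              <limit>
                <counter>BRANCH</counter>
                <value>COVEREDRATIO</value>
                <minimum>0.80</minimum> <!-- the subjective choice, made once -->
              </limit>
            </limits>
          </rule>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
```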

Other notes

The above assumes that correctness is the goal. Code coverage is just information; it may be relevant to other goals. For instance, if you're concerned about maintainability, you probably care about loose coupling, which can be demonstrated by testability, which in turn can be measured (in certain fashions) by code coverage. So your code coverage standard provides an empirical basis for approximating the quality of "maintainability" as well.

Solution 4

Code coverage is great, but functionality coverage is even better. I don't believe in covering every single line I write. But I do believe in writing 100% test coverage of all the functionality I want to provide (even for the extra cool features I came up with myself, which were not discussed during the meetings).

I don't mind having code that isn't covered by tests, but I would mind refactoring my code and ending up with different behaviour. Therefore, 100% functionality coverage is my only target.
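
A minimal JUnit 5 sketch of this mindset (the feature and all names here are invented for illustration): the tests pin down a promised behaviour rather than particular lines, so a refactoring that changes behaviour fails the suite even though no test was written "for coverage":

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Hypothetical feature: orders over 100 ship free, smaller orders pay a flat rate.
class ShippingPolicy {
    double shippingCost(double orderTotal) {
        return orderTotal > 100.0 ? 0.0 : 5.0;
    }
}

// Each test states a piece of required functionality, not an implementation detail;
// shippingCost can be rewritten freely as long as these still pass.
class ShippingPolicyTest {
    private final ShippingPolicy policy = new ShippingPolicy();

    @Test
    void ordersOver100ShipFree() {
        assertEquals(0.0, policy.shippingCost(120.0));
    }

    @Test
    void smallOrdersPayFlatRate() {
        assertEquals(5.0, policy.shippingCost(40.0));
    }
}
```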

Solution 5

My favorite code coverage is 100% with an asterisk. The asterisk comes because I prefer to use tools that allow me to mark certain lines as lines that "don't count". If I have covered 100% of the lines which "count", I am done.

The underlying process is:

  1. I write my tests to exercise all the functionality and edge cases I can think of (usually working from the documentation).
  2. I run the code coverage tools.
  3. I examine any lines or paths not covered; any that I consider unimportant or unreachable (due to defensive programming) I mark as not counting.
  4. I write new tests to cover the missing lines, and improve the documentation if those edge cases are not mentioned.

This way, if my collaborators and I add new code or change the tests in the future, there is a bright line to tell us if we missed something important: the coverage dropped below 100%. However, it also provides the flexibility to deal with different testing priorities.
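
The answer doesn't name the tools it uses. As one possibility in the Java ecosystem (an assumption about the toolchain, worth verifying against your JaCoCo version; the annotation name below is invented): recent versions of JaCoCo filter out code annotated with any annotation whose simple name matches *Generated* and whose retention is CLASS or RUNTIME, which can serve as the "doesn't count" marker at method or class granularity:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Hypothetical marker: the simple name ends with "Generated", so JaCoCo's
// filter excludes annotated code from the coverage report.
@Retention(RetentionPolicy.RUNTIME)
@interface ExcludeFromCoverageGenerated { }

class RequestHandler {

    void handle() {
        // normal code: counts toward coverage as usual
    }

    @ExcludeFromCoverageGenerated
    void defensiveShutdown() {
        // defensive, deemed-unreachable code, deliberately marked as
        // "not counting" per step 3 of the process above
    }
}
```

Other tools take comment markers instead; the // @codeCoverageIgnore marker for PHP mentioned in the comments below plays the same role.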

Author: sanity

I'm an entrepreneur and computer scientist, with a particular interest in Artificial Intelligence and Peer-to-Peer. My two most notable projects are Freenet and Revver (I'm founder and co-founder, respectively). My current projects are a predictive analytics system called SenseArray, and a new approach to distributed computation called Swarm. You can find my personal blog here. While I've used C, C++, ML, Haskell, Prolog, Python, and even Perl in the past, these days I do most of my programming in Java. I am gaining experience with Scala, though, and expect it to become my primary language as it and its tools mature. I was honored to be asked by Scala's creator to be on the program committee for the first Scala workshop.

Updated on July 01, 2021

Comments

  • sanity
    sanity almost 3 years

    If you were to mandate a minimum percentage code-coverage for unit tests, perhaps even as a requirement for committing to a repository, what would it be?

    Please explain how you arrived at your answer (since if all you did was pick a number, then I could have done that all by myself ;)

  • sanity
    sanity almost 16 years
    How did you arrive at that percentage?
  • stephbu
    stephbu almost 16 years
    As a footnote - this can be messy for projects where automation is difficult - as always, be pragmatic about what is achievable vs. desirable.
  • Charles Graham
    Charles Graham almost 16 years
    You should probably use Model View Presenter for your UI if you are in a TDD environment.
  • stephbu
    stephbu almost 16 years
    Mainly through experimentation. It is pretty easy to get code coverage to 80-90% for dev-related unit tests; going higher normally needs divine test intervention, or really simple code paths.
  • sanity
    sanity almost 16 years
    Sounds like an argument against the general concept of code coverage, as a metric for evaluating the usefulness of unit tests. I'm sure everyone agrees it isn't a perfect metric, but personal experience should hopefully show some correlation between CC % and unit test effectiveness...
  • sanity
    sanity almost 16 years
    Don't want a rule, just feedback on any personal experience on the correlation between code-coverage percentage and unit test effectiveness.
  • Jon Limjap
    Jon Limjap almost 16 years
    sanity -- your statement is mirrored precisely by the response to the "second developer". Personal experience should dictate it.
  • sanity
    sanity almost 16 years
    I see, so you think that there is no valuable information that can be shared other than "make a decision based on your personal experience"?
  • sanity
    sanity almost 16 years
    Obviously there has to be common sense in there. It's not much use if the 50% of the code you are testing is comments.
  • Jon Limjap
    Jon Limjap almost 16 years
    It's more along the lines of... is your coverage spent on your application's core functionality, or is it uselessly testing trivial features/nice-to-haves?
  • Rob Cooper
    Rob Cooper over 15 years
    Perfect answer. Metrics do not make good code. You can write crappy code with 100% coverage and it doesn't make the code work well. +1 from me, shame I can't upvote more :)
  • tddmonkey
    tddmonkey over 15 years
    Exceptions: if you don't test your exception handling, how do you know your code doesn't blow up when that happens? Setters/getters: context-sensitive, I suppose, but surely your tests should execute them as part of the test suite, and if they don't, are they actually being used?
  • Gishu
    Gishu over 15 years
    Exceptions should be exceptional - not supposed to happen. If they do, you log the point of failure and bail. You can't test every exception that could happen. If the app is supposed to handle a non-happy path/event, you should have a test for it. Accessors may be added for future clients... it depends.
  • Gishu
    Gishu about 15 years
    I take back my words: Exceptions need not be exceptional - a method may throw an exception to indicate that it could not carry out its work to completion. What I meant was you don't need a test for every method where you've put a defensive just-in-case try-catch. All known failure modes of a method must be tested with automated UTs
  • Asim Ihsan
    Asim Ihsan about 13 years
    I'm not sure what you mean by your second point "but still have tested all your code-paths". If you in fact mean full-path coverage, then no you cannot have full-path coverage without 100% line/branch/decision coverage. In fact, full-path coverage is usually unobtainable in any non-trivial program because of the combinatoric nature of branches in generating paths. en.wikipedia.org/wiki/Code_coverage#Other_coverage_criteria
  • Gishu
    Gishu about 13 years
    @Zach - my mistake. I should have said "all code paths that you want to work" or "non-fringe paths". If the path failure is a very rare possibility I would be okay with it not being covered.
  • Dawood ibn Kareem
    Dawood ibn Kareem about 12 years
    You don't test every possible exception; of course you can't do that. You SHOULD aim to test every block of code that handles exceptions. For example, if you have a requirement that when block X throws an exception, the exception is logged in the database, the green stripe at the bottom of the screen turns red, and an email is sent to the Pope; then that is what you should test. But you don't have to test every possible exception that might trigger these events.
  • Dawood ibn Kareem
    Dawood ibn Kareem about 12 years
    I disagree completely. A unit test is only worth something if there's a chance that it will uncover a bug, (either a bug that exists now or a regression bug in the future); or if it helps to document the behaviour of your class. If a method is so simple that it can't really fail, such as a one-line getter, then there is zero value in providing a unit test for it.
  • Dawood ibn Kareem
    Dawood ibn Kareem about 12 years
    This is a fantastic answer. Code that meets its requirements is a far more worthwhile goal than code that meets some arbitrary LoC coverage metric.
  • brickner
    brickner about 12 years
    I had bugs in one line getters. From my experience, there's no bug free code. There's no method that can't really fail.
  • beluchin
    beluchin almost 12 years
    +1 for "Use code-coverage to highlight chunks of code you have missed". That's basically what that metric is good for.
  • SickHippie
    SickHippie over 11 years
    4 years later, and still useful. Just pulled this on two of my colleagues this morning.
  • Jens Timmerman
    Jens Timmerman over 11 years
    If you can provide all functionality without hitting all the lines of code, then what are those extra lines of code doing there?
  • tofi9
    tofi9 over 11 years
    @JensTimmerman theoretically you're right. However, 100% code coverage is too expensive time-wise, and forcing my team to do that not only demotivates them, but also makes my project run over the deadline. I like to be somewhere in the middle, and testing functionality (call it: integration testing) is what I feel comfortable with. What code I don't test? Technical exception handling, (range/parameter) checks that could be needed. In short, all technical plumbing that I learned to apply from own experience or best practices I read about.
  • avandeursen
    avandeursen over 11 years
    Savoia later reposted this at googletesting.blogspot.nl/2010/07/…
  • Lowell Montgomery
    Lowell Montgomery about 11 years
    Assuming your one-line getter is used by other code that you do cover, and the tests of that code pass, then you've also indirectly covered the one-line getter. If you aren't using the getter, what's it doing in your code? I agree with David Wallace… there is no need to directly test simple helper functions that are used elsewhere if the code and tests which depend on the helper don't show there might be a problem with it.
  • stephbu
    stephbu over 10 years
    I usually start with 1) major runtime code paths, 2) obvious exception cases that I explicitly throw, 3) conditional cases that terminate with "failure". This usually gets you into the 70-80 range. Then whack-a-mole: bugs and regressions for corner cases, parameter fuzzing, etc., and refactoring to enable injection of methods. I generally allow at least as much time for writing/refactoring dev-related tests as for the main code itself.
  • samspot
    samspot about 10 years
    To me this anecdote represents an idealistic view. In the real world of project teams with competing priorities, code coverage races to 0%. We need a required number in order to build the unit testing habit within the team. I came to this question looking for some guidance on determining that number for an area I am not very familiar with, and this is really no help at all. I'm glad people in other scenarios are finding it useful though.
  • Admin
    Admin over 9 years
    It's 2015 and metrics do make good code. They're indicators of good code, and while you can write some junk that works, it will turn software development into a cost center at some point and you'll get bogged down in maintenance. So yes, metrics don't make good code, but they do indicate it. Even the infamous lines-of-code metric indicates how much your team can output per month, at maximum.
  • tobijdc
    tobijdc about 9 years
    Code coverage is only one metric, and it depends a lot on your project. I would look more closely at the coverage of your business logic and do additional mutation testing (for Java, with PITest) as a second metric. Try to test as much as you can and use code-quality tools (for Java: FindBugs, PMD, ErrorProne) so you can find existing potential problems. This should lead you in the right direction.
  • dance2die
    dance2die over 8 years
    I ran a test-coverage tool on my code for the "first" time in my life and found out that my tests do not cover everything. I was so ready to test the "set" functionality of my property. I now know that I don't need to.
  • speedplane
    speedplane over 8 years
    Instead of following an absolute code coverage percentage, many track the change over time. Suppose you have 20% coverage from tests. While that number may or may not be satisfactory at the time, if the number changes downward, it can provoke questions about code quality.
  • killscreen
    killscreen over 8 years
    A young student came to Pythagoras. "I have a right triangle," he asked, "but how may I know the length of its longest side?" Pythagoras, who was very wise and cool, merely laughed. "This I cannot tell you," he proclaimed, "for it depends on the triangle!"
  • Jon Limjap
    Jon Limjap over 8 years
    @killscreen You do know there's a difference between mathematical axioms and things that are answerable by "it depends", right? :p
  • killscreen
    killscreen over 8 years
    @JonLimjap - There are differences and there are commonalities. One commonality is that both problems have dependencies. It is correct in both cases to answer "it depends." It is as correct and more complete, in both cases, to answer with an explanation of what those dependencies are and how they influence the result.
  • Erik Aronesty
    Erik Aronesty over 8 years
    @sanity it's an argument that relative code coverage (it went up this week, it went down... why???) makes sense as a metric of where you are now vs. where you were before. I'm working on a project that contains a LOT of awful, untestable auto-generated boilerplate code.
  • Erik Aronesty
    Erik Aronesty over 8 years
    Sounds like a large % of your code is either boilerplate, or exception handling, or conditional "debug mode" stuff.
  • Chris Stephens
    Chris Stephens over 8 years
    CC% works pretty well with code reviews. Don't let junk tests in. As a reviewer, you should probably look at the tests first to see what the code is supposed to be doing. From there you can judge the code. If I had to put rough numbers on CC% (directly out of my bum): 30%, probably not good. 50%, you are probably catching some stuff; you're making an effort. 75%, much better; this might be a good target. 90%, very cool, probably some waste. 100%, you may be cheating or your coverage plugin is broken.
  • curlyreggie
    curlyreggie over 8 years
    Good answer. Can you help me find functionality coverage via unit tests? Any tool(s) that can help me achieve this?
  • Ali Motevallian
    Ali Motevallian over 8 years
    I really challenge the person who wrote this to explain why it is valid to write code that cannot be tested, whether it is a DTO or anything else.
  • Gishu
    Gishu about 8 years
    Coming back to this question after 7 years: what my comment left in the gaps was this. If you throw a custom exception, of course you should test that; it is part of your contract with your client. But you don't write tests for out-of-the-blue NullReferenceException, OutOfMemoryException, etc. They could happen, and you may put in generic catch handlers for these (to log the failure for troubleshooting), but I don't generally write tests for them. I'm okay with such a handler being uncovered as long as it's a simple/small generic catch-log-rethrow handler.
  • datashaman
    datashaman about 8 years
    'Smoking hot green tea' would be (arguably) undrinkable and taste very bitter. Temperature is crucial. thefragrantleaf.com/green-tea-brewing-tips
  • fips
    fips about 8 years
    It may be the case that good coverage doesn't guarantee good code or tests, but some people use this fact as an excuse to get away with low coverage, which indicates a lack of tests, and that is clearly worse than a few tests. So why not aim for both, good coverage and good tests? E.g. based on expected features: ronjeffries.com/xprog/articles/jatrtsmetric
  • Agoston Horvath
    Agoston Horvath almost 8 years
    Code coverage means nothing. Without proper assertions after test run, or proper setup of test environment, it is useless. Besides, you don't test lines of code -- you test functionality. As such, tests should not be tied to the implementation but check the desired functionality.
  • Daniel
    Daniel almost 8 years
    @LowellMontgomery and what if the test for your other code fails because of that one-line getter (that was not tested)? If there was a test in place for the one-liner, it would be much easier to get to the cause of the fail. It gets really bad when you have hundreds of not tested one-liners being used in several different places.
  • Bamieh
    Bamieh almost 8 years
    moral of the story: 80% !
  • user3078523
    user3078523 almost 8 years
    @killscreen I think the answer to your doubts is there in the text. If you're just getting started with testing, then focusing on code coverage would do you more harm than good. If you are more experienced, then measuring code coverage will give you some useful information about the general health of your system and how it changes over time, but it will not give you strict recommendations (it's your code, after all, and you should know better how many tests are enough). Personally, I like it this way; it shows that programming is also a craft and a sort of art, not only following recipes.
  • Lowell Montgomery
    Lowell Montgomery over 7 years
    The assumption was the tests using the one-line getter passed. If it failed (e.g. where you try to use the return value of your one-line getter), then you can sort it out. But unless there is a really pressing reason for being so paranoid, you have to draw the line somewhere. My experience has been that I need to prioritize what sucks my time and attention and really simple "getters" (that work) don't need separate tests. That time can be spent on making other tests better or more full coverage of code that is more likely to fail. (i.e. I stand by my original position, with David Wallace).
  • dtortola
    dtortola over 7 years
    In my experience, code coverage works once to test that what I write does what it should, but many times to ensure that what I wrote still does what it should after someone (me included) changed the code due to new requirements. And to those who think this "proves" they can have low coverage: the lower the coverage, the fewer your tests; the fewer your tests, the fewer your good tests. You can still have crappy code with 100% coverage, but you can have much crappier code when allowed less coverage.
  • Dorian
    Dorian over 7 years
    Just the concept of having huge chunks of code that are not covered by the specs is pretty crazy! (e.g.: DTO)
  • Warren Dew
    Warren Dew almost 7 years
    @grasevski That's because person 3 makes him feel inadequate, so he overcompensates.
  • domdambrogia
    domdambrogia almost 7 years
    Would you care to include the "tools that allow [you] to mark certain lines as lines that don't count"?
  • mhasan
    mhasan almost 7 years
    Your short story helped me align JUnit thoughts in my team. Thanks a lot!
  • 0x1mason
    0x1mason over 6 years
    Great answer. It's the only one that focuses on testing as a team problem in an industrial setting. I don't get to review everything and my team is very bright, but green. I set a percentage floor of 90% on a new project as a sanity check for junior devs, not because I believe it is "enough". "90%" and "positive, negative, and null" are easy mantras for bright, young developers who I know will do a good job, but don't have the experience to go ahead and write that extra test case that's nagging at the back of your mind.
  • Pavel Lechev
    Pavel Lechev over 6 years
    The coverage is just that: an indication that a given line of code was executed by the runtime. It is not an indication that that particular line of code was de facto tested. For this, one needs a solid understanding and application of the behavioural assertions which will prove that the code does what it was meant to do.
  • Pavel Lechev
    Pavel Lechev over 6 years
    100% coverage is easily achieved, but with poor-quality assertions the tests are utterly useless as a proofing tool. Religious worship of "high code coverage" can in fact be dangerous, as it leads to a false sense of security when there may not be any at all. "Write some good tests" is probably the best philosophy one can adhere to, be it as a novice or a 30-year code veteran.
  • Skeeterdrums
    Skeeterdrums over 6 years
    I took this a step further by making a list of common situations that should be either included or excluded from testing. That way, we were never driving towards a percent, but rather functional coverage of all parts of the working codebase.
  • yeoman
    yeoman over 6 years
    I find it hilarious that people react to this post with a discussion about how many tests one should write - that perfectly breaks the fourth wall 😂
  • victorhazbun
    victorhazbun about 6 years
    Metrics won't make your app solid; great tests will. Metrics in software development are just a guide for you to know where you can improve, and more importantly: you should not rely on them to determine whether it's a quality application or not.
  • bugkiller
    bugkiller almost 6 years
    I think this is the best answer available.
  • bishop
    bishop over 5 years
    @domdambrogia As an example in PHP, if using Bergmann's code coverage library, annotate a line with // @codeCoverageIgnore and it'll be excluded from coverage.
  • windmaomao
    windmaomao over 5 years
    On second thought, though it's all true, isn't this the answer to any question? We can't approach questions in this fashion :)
  • ConsistentProgrammer
    ConsistentProgrammer about 5 years
    10 years later, the advice of the old master still holds :)
  • Developerium
    Developerium over 4 years
    Still good advice in 2020!
  • Jankapunkt
    Jankapunkt over 3 years
    From a DDD perspective the high(est) aim for business logic is very reasonable. Detecting the slightest change in business logic behavior is crucial.
  • CommonSenseCode
    CommonSenseCode over 3 years
    I believe the 80% number comes from Martin Fowlers article on the subject martinfowler.com/bliki/TestCoverage.html
  • Tim Rutter
    Tim Rutter about 2 years
    It's an argument against metrics in general. The metrics influence the behaviour and so do not deliver what is really needed: not high-quality code, just well-covered code.
  • Tim Rutter
    Tim Rutter about 2 years
    It needs to be a loose goal: 10% coverage shows you've got a very high chance of problems, but trying to get to 100% is going to be a lot of work with diminishing gains. The magic figure depends on the team, the software, and numerous other factors.