In agile-like development, who should write test cases?


Solution 1

The Team.

If a defect reaches a customer, it is the team's fault; therefore the team should be writing test cases to ensure that defects don't reach the customer.

  1. The Project Manager (PM) should understand the domain better than anyone on the team. Their domain knowledge is vital to having test cases that make sense with regard to the domain. They will need to provide example inputs and answer questions about expectations on invalid inputs. They need to provide at least the 'happy path' test case.

  2. The Developer(s) will know the code. You suggest the developer may be best suited for the task, but you say you are looking for black-box test cases, and any tests a developer comes up with will be white-box tests. That is the advantage of having developers create test cases – they know where the seams in the code are.

    Good developers will also come to the PM with questions like "What should happen when...?"; each of these is a test case. If the answer is complex ("If a then x, but if b then y, except on Thursdays"), there are multiple test cases; see the sketch after this list.

  3. The Testers (QA) know how to test software. Testers are likely to come up with test cases that the PM and the developers would not think of – that is why you have testers.
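
To make the point in item 2 concrete, here is a minimal sketch (mine, not from the answers above) of how a complex answer from the PM expands into several black-box test cases. It assumes pytest, and the `shipping_fee` function and its rules are hypothetical, invented purely for illustration:

```python
# Hypothetical example: the PM's answer "if a then x, but if b then y,
# except on Thursdays" becomes one test case per branch of the rule.
# shipping_fee and its rules are invented purely for illustration.
import pytest


def shipping_fee(order_total, day):
    """Toy rule: free shipping over 100, reduced fee on Thursdays, else 10."""
    if order_total > 100:
        return 0
    if day == "Thursday":
        return 5
    return 10


@pytest.mark.parametrize(
    "order_total, day, expected",
    [
        (150, "Monday", 0),    # "if a then x"
        (50, "Monday", 10),    # "if b then y"
        (50, "Thursday", 5),   # "...except on Thursdays"
        (150, "Thursday", 0),  # the two rules interacting
    ],
)
def test_shipping_fee(order_total, day, expected):
    assert shipping_fee(order_total, day) == expected
```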

Solution 2

I think the Project Manager or Business Analyst should write those test cases.
They should then hand them over to the QA person to flesh out and execute.

That way you ensure there are no gaps between the spec and what is actually tested and delivered.

The developers should definitely not do it, as they will essentially be re-testing what their unit tests already cover, so it's a waste of their time.

In addition, these tests will find errors the developer would never find, since those errors usually stem from a misunderstanding of the spec, or from a feature or path through the code that was not thought through and implemented correctly.

If you find you don't have enough time for this, hire someone else, or promote someone to this role, as it's key to delivering an excellent product.

Solution 3

We experimented with pairing the developer with a QA person, with pretty good results. They generally kept each other honest, and since the developer had unit tests covering the code, he or she was already quite familiar with the changes. The QA person wasn't, but came at it from the black-box side. Both were held accountable for completeness. The ongoing review process helped catch unit-test shortcomings, so there weren't many incidents that I was aware of where anyone purposely avoided writing a particular test because it would likely prove there was a problem.

I like the pairing idea in some instances and think it worked pretty well. It might not always work, but having players from those different areas interact helped avoid the 'throw it over the wall' mentality that often happens.

Anyhow, hope that is somehow helpful to you.

Solution 4

From past experience, we had pretty good luck defining tests at different levels to test slightly different things:

1st tier: At the code/class level, developers should write atomic unit tests. The purpose is to test individual classes and methods as thoroughly as possible. These tests should be run by developers as they code, ideally before committing code to source control, and by a continuous-integration server (automated) if one is being used.
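
As a minimal sketch of what a first-tier test might look like (assuming pytest; the `PriceCalculator` class is hypothetical, invented only for this example):

```python
# First-tier (atomic) unit test: exercises a single class in isolation.
# PriceCalculator is a hypothetical class, invented for this example.
import pytest


class PriceCalculator:
    def __init__(self, tax_rate):
        self.tax_rate = tax_rate

    def total(self, net_amount):
        if net_amount < 0:
            raise ValueError("net_amount must be non-negative")
        return round(net_amount * (1 + self.tax_rate), 2)


def test_applies_tax_rate():
    assert PriceCalculator(0.10).total(100) == 110.0


def test_rejects_negative_amounts():
    with pytest.raises(ValueError):
        PriceCalculator(0.10).total(-1)
```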

2nd tier: At the component-integration level, developers again write the tests, but these exercise the integration between components. The purpose is not to test individual classes and methods but to test how components interact with each other. These tests should be run manually by an integration engineer, or automated on a continuous-integration server if one is in use.
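
A second-tier test, by contrast, wires two components together and checks the seam between them rather than either piece in isolation. Another illustrative sketch, with `OrderStore` and `OrderService` invented for the example and an in-memory SQLite database standing in for a real store:

```python
# Second-tier (integration) test: checks how two components interact,
# here a hypothetical OrderService talking to an OrderStore backed by
# an in-memory SQLite database. All names are invented for illustration.
import sqlite3


class OrderStore:
    def __init__(self, conn):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, total REAL)"
        )

    def add(self, total):
        return self.conn.execute(
            "INSERT INTO orders (total) VALUES (?)", (total,)
        ).lastrowid

    def get_total(self, order_id):
        row = self.conn.execute(
            "SELECT total FROM orders WHERE id = ?", (order_id,)
        ).fetchone()
        return row[0] if row else None


class OrderService:
    def __init__(self, store):
        self.store = store

    def place_order(self, total):
        if total <= 0:
            raise ValueError("total must be positive")
        return self.store.add(total)


def test_placed_order_is_persisted():
    store = OrderStore(sqlite3.connect(":memory:"))
    service = OrderService(store)
    order_id = service.place_order(42.0)
    assert store.get_total(order_id) == 42.0
```

The distinction is that the first-tier test never touches the database, while the second-tier test exercises exactly the interaction the atomic tests skip.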

3rd tier: At the application level, have the QA team run their system tests. These test cases should be based on the business assumptions or requirements documents provided by a product manager. Basically, test as if you were an end user, doing the things end users should be able to do, as documented in the requirements. These test cases should be written by the QA team and the product managers, who (presumably) know what the customer wants and how they are expected to use the application.

I feel this provides a pretty good level of coverage. Ideally, tiers 1 and 2 above should be run before a build is sent to the QA team. You can of course adapt this to whatever fits your business model, but it worked pretty well at my last job. Our continuous-integration server would also send an email to the development team if one of the unit tests failed during the build/integration process, in case someone forgot to run their tests and committed broken code to the repository.

Solution 5

"The reason I don't like having my QA people do it, is because I don't like the idea of them creating their own work. For example they might leave out things that are simply too much work to test, and they may not know the technical detail that is needed."

Yikes, you need to have more trust in your QA department, or a better one. I mean, imagine if you had said "I don't like having my developers develop software. I don't like the idea of them creating their own work."

As a developer, I know that there are risks involved in writing my own tests. That's not to say I don't do it (I do, especially if I am doing TDD), but I have no illusions about test coverage. Developers are going to write tests that show that their code does what they think it does. Not too many are going to write tests that apply to the actual business case at hand.

Testing is a skill, and hopefully your QA department, or at least, the leaders in that department, are well versed in that skill.


Comments

  • Brian R. Bondy (almost 4 years)

    Our team has a task system where we post small incremental tasks assigned to each developer.

    Each task is developed in its own branch, and then each branch is tested before being merged to the trunk.

    My question is: Once the task is done, who should define the test cases that should be done on this task?

    Ideally I think the developer of the task himself is best suited for the job, but I have had a lot of resistance from developers who think it's a waste of their time, or who simply don't like doing it.

    The reason I don't like having my QA people do it, is because I don't like the idea of them creating their own work. For example they might leave out things that are simply too much work to test, and they may not know the technical detail that is needed.

    But likewise, the downside of having developers write the test cases is that they may leave out things they suspect will break (even subconsciously, perhaps).

    As the project manager, I ended up writing the test cases for each task myself, but my time is taxed and I want to change this.

    Suggestions?

    EDIT: By test cases I mean the descriptions of the individual QA tasks that should be performed on the branch before it is merged to the trunk. (Black box)

  • Brian R. Bondy (over 15 years)
    The analogy with the developers is not right; the correct one would be that I don't like them defining their own spec, which I don't allow them to do either. I do have faith in my QA department.
  • Chris McCauley (almost 14 years)
    I think it's pretty unlikely that PMs "understand the domain better than anyone." In general, PMs are not business specialists but experts in cost, schedule, and risk management. More likely, the PM would find someone who might be an expert. The BA is more likely to understand the domain.
  • mandersn (almost 14 years)
    I've been on three agile teams, and our PM and BA were generally the same person.
  • Roy Oliver (almost 10 years)
    On the team, the PMs won't know the domain better than anyone else. It's great if they do, but in general they do not. On a small project I can see that being possible, but not on mid-sized and large ones.
  • sowrov (about 3 years)
    Test cases should come from the analysts who have access to the customers and write the requirements; test cases are also part of the requirements.