Help me understand how QA works in Scrum


Solution 1

My opinion is that you have an estimation problem. It seems the time to test each feature is being left out, and only the building part is considered when planning the sprint.

I'm not saying it is an easy problem to solve - it is about as common as they come - but a few things could help:

  • Consider QA as members of the dev team, and include them more closely in sprint planning and estimation.

  • 'Releasable Dev tasks' should not take up most of the sprint; complete, working features should. Try to gather metrics on dev time vs. QA time for each kind of task and use those metrics when estimating future sprints (a small sketch of this follows the list).

  • You might need to review your backlog to see whether you have very coarse-grained features. Try to divide them into smaller tasks that can be estimated and tested more easily.
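As a rough illustration of the metrics bullet above, here is a minimal sketch (the task history, the field names, and the estimate_task helper are all hypothetical) of folding historical dev-vs-QA time into the next estimate:

    # Hypothetical history of completed tasks: hours spent by Dev and by QA.
    history = [
        {"kind": "ui",  "dev_hours": 8,  "qa_hours": 6},
        {"kind": "ui",  "dev_hours": 10, "qa_hours": 9},
        {"kind": "api", "dev_hours": 12, "qa_hours": 4},
    ]

    def qa_ratio(kind):
        """Average QA-to-Dev hours ratio for one kind of task."""
        rows = [t for t in history if t["kind"] == kind]
        return sum(t["qa_hours"] for t in rows) / sum(t["dev_hours"] for t in rows)

    def estimate_task(kind, dev_estimate_hours):
        """Total estimate = Dev estimate plus the QA share implied by history."""
        return dev_estimate_hours * (1 + qa_ratio(kind))

    # A 10-hour UI dev estimate is really ~18.3 hours once QA time is included.
    print(round(estimate_task("ui", 10), 1))

The exact model matters less than the habit: every estimate that reaches sprint planning should already include the testing share.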

In summary, it seems that your team hasn't discovered its real velocity yet, because some tasks are not being considered during sprint estimation and planning.

But in the end, estimation inaccuracy is a tough project management issue that you find in agile and waterfall projects alike. Good luck.

Solution 2

A little late to the party here but here's my take based on what you wrote.

Now, Scrum is a project management methodology, not a development one. But it is key, in my opinion, to have a development process in place. Without one, you spend the majority of your time reacting rather than building.

I'm a test-first guy. In my development process I build tests first to enforce the requirements and the design decisions. How is your team enforcing those? The point I'm trying to make here is that you simply can't "throw stuff over the fence" and expect anything but failure to occur. That failure is either going to be by the test team (by not testing very well and thus letting problems slip by) or by the developers (by not building the product that solves the problem). I'm not saying you must write tests first - I'm not a militant or a test-first evangelist - but I'm saying you must have a process in place to produce quality, tested, ready-for-production code when you reach an iteration's end.
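To make the test-first point concrete, here is a minimal sketch in Python with pytest; the password_is_valid function and its policy module are hypothetical stand-ins for whatever requirement you are enforcing:

    # test_password.py -- written BEFORE password_is_valid exists.
    # The failing tests encode the requirement; the code comes afterwards.
    from policy import password_is_valid  # hypothetical module under test

    def test_rejects_short_passwords():
        assert not password_is_valid("abc12")

    def test_accepts_long_mixed_passwords():
        assert password_is_valid("correct-horse-42")

You run pytest, watch both tests fail, then write just enough of password_is_valid to turn them green.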

I've been right where you are, in a development methodology that I call the Death Spiral Method. I built software for the US government for years in such a model. It doesn't work well: it costs a LOT of money, it produces late, poor code, and it does nothing for morale. You can't make any headway when you spend all your time fixing bugs you could have avoided making in the first place. I was absolutely beaten down by the affair.

You don't want QA finding your problems. You want to put them out of work, really. My goal is to make QA flabbergasted because everything just works. Granted, that is a goal. In practice, they'll find stuff. I'm not super-human. I make mistakes.

Back to scheduling...

At my current job we do Scrum, we just don't call it that. We aren't into labels here but we are into producing quality code on time. Everyone is on-board. We tell QA what we'll have ready to test and when. If they come a-knocking two weeks early for it, they can talk to the hand. Everyone knows the schedule, everyone knows what will be in the release and everyone knows that the product has to work as advertised before it goes to QA. So what does that mean? You tell QA "don't bother testing XYZ - it is broken and won't be fixed until release C" and if they go testing that, you point them back at that statement and tell them not to waste your time. Harsh, perhaps, but sometimes necessary. I'm not about being rude, but everyone needs to know "the rules" and what should be tested and what is a 'known issue'.

Your management has to be on board. If they aren't you are going to have troubles. QA can't run the show and the dev group can't completely run it either. All the groups (even if those groups are just one person per group or a guy that wears several hats) need to be on the same page: the customer, the test team, the developers, management, and anyone else. More than half the battle is communication, typically.

Perhaps you are biting off more than can be accomplished during a sprint. That might be the case. Why are you doing that? To meet a schedule? If so, that is where management needs to step in and resolve the issue. If you are giving QA buggy code, expect them to toss it back. Better to give them 3 things that work than 8 things that are unfinished. The goal is to produce some set of functionality that is completely implemented on each iteration, not to throw together a bunch of half-done stuff.

I hope this is received as it is intended to be - as an encouragement not a rant. Like I mentioned, I've been where you are and it isn't fun. But there is hope. You can get things turned around in a sprint, maybe two. Perhaps you don't add any new functionality in the next sprint and simply fix what is broken. You'll have to decide that as a team.

One more small plug for writing test code: I've found myself far more relaxed and far more confident in my product since adopting a 'write the tests first' approach. When all my tests pass, I have a level of confidence that I simply couldn't have without them.

Best of luck!

Solution 3

Hopefully, you fix this by tackling fewer dev tasks in each sprint. Which leads to two questions: Who's setting dev's goals? And why is dev falling short of those goals so consistently?

If dev isn't setting their own goals, that's why they're always late, and it isn't the ideal way to practice Scrum. That's just incremental development with big, deadline-driven deliverables and no actual stakeholder responsibility on the part of developers.

If dev can't set their own goals because they don't know enough, then they have to be more involved up front.

Scrum depends on the four basic values outlined in the Agile Manifesto.

  1. Interactions matter -- that means dev, QA, project management, and end users need to talk more, and talk with each other. Software is a process of encoding knowledge in the arcane language of computers; to encode the knowledge, the developers must have the knowledge. [Why do you think we call it "code"?] Scrum is not a "write spec - throw over transom" methodology. It's the ANTI-"write spec - throw over transom" methodology.

  2. Working Software matters -- that means that each piece dev bites off has to lead to a working release. Not a set of bug fixes for QA to wrestle with, but working software.

  3. Customer Collaboration -- that means dev has to work with business analysts, end users, business owners, everyone who can help them understand what they're building. The deadlines don't matter as much as the next thing handed over to the customer. If the customer needs X, that's the highest priority thing for everyone to do. If the project plan says build Y, that's a load of malarkey.

  4. Responding to Change -- that means that customers can rearrange the priorities of the following sprints. They can't rearrange the sprint in process (that's crazy) but all the following sprints are candidates for changing priorities.

If the customer drives, then the deadlines become less artificial "project milestones" and more "we need X first, then Y, and this thing in section Z, we don't need that any more. Now that we have W, Z is redundant."

Solution 4

The Scrum rules say that all Sprint items need to be "fully tested, potentially shippable features" at the end of the Sprint to be considered complete. Sprints ALWAYS end on time, and the Team doesn't get credit and isn't allowed to present anything at the Sprint review that isn't complete - and that includes QA.

Technically, that's all you should need. Say a Team commits to a certain amount of work, finally gets it to QA two days before the end of the Sprint, and the QA isn't done in time. Then the output from the Sprint is zero, and they have to go in front of the Customer and admit that they have nothing to show for a month of work.

Next time round, you can bet that they'll pick less work and figure out how to get it to QA early enough to be finished on time.

Solution 5

Speaking as a QA who has worked on Agile projects for 2.5 years: this is a really difficult issue, and I still don't have all the answers.

I work as part of a "triplet" (two developers who pair program, plus one QA) and I am involved in tasking out stories and estimating in planning meetings at the beginning of two-week iterations. As adrianh mentioned above, it is essential for QAs to get their voice heard in the initial sprint planning. This can be difficult, especially if you are working with developers who have very strong personalities, but QAs must be assertive in the true sense of the word (i.e. not aggressive or forceful, but respectfully seeking to understand the Truth/PO and the developers/technical experts whilst making themselves understood). I advocate producing QA tasks first during planning to encourage a test-driven mentality - the QA may literally have to put themselves forward to get this adopted. It is the opposite of how many people think software development works, but it pays dividends for several reasons:

  1. QA is heard and not relegated to being asked "so how are you going to test that?" after Devs have said their piece (waterfall mentality).

  2. It allows QA to propose ideas for testing, which at the same time checks the testability of the acceptance criteria while the Truth/PO is present (I did say it is essential for them to be present in the planning meeting, didn't I?!) to fill in any gaps in understanding.

  3. It provides the basis for a test-driven approach - once the test approach has been enunciated and tasked, the Devs can think about how they will produce code to pass those tests.

  4. Even if steps 1 - 3 are your only TDD activity for the rest of the iteration, you are still doing a million times better than the scenario postulated by Steve in the first post: "Developers thrash around trying to accomplish their tasks. Generally the tasks take most of the sprint to complete. QA pesters Dev to release something they can test, Dev finally throws some buggy code out to QA a day or two before the sprint ends and spends the rest of the time fixing bugs that QA is finding."

Needless to say, this comes with some caveats for the QA:

  1. They must be prepared to have their ideas for testing challenged by the Devs and the Truth/PO, and to reach a compromise; the "QA police" attitude won't wash in an Agile team.

  2. QA tasks must strike a difficult balance: neither too detailed nor too generic (tasks can be written on cards for a "radiator board" and discussed at daily stand-up meetings - they need to be moved from "in progress" to "completed" DURING the iteration).

  3. QAs need to prepare for planning/estimation meetings. Don't expect to be able to just turn up and produce a test approach off the top of your head for unseen user stories! Devs do seem to be able to do this because their tasks are often far more clear cut - e.g. "change x module to interface with z component" or "refactor y method". As a QA you need to be familiar with the functionality being introduced/changed BEFORE planning so that you know the scope of testing and what test design techniques you might apply.

  4. It is almost essential to automate your tests and have them written and "failing" within the first two or three days of an iteration, or at least in time to coincide with when the Devs have the code ready. You can then run the test(s) and see whether they pass as expected (proper QA TDD). This is how you avoid a mini waterfall at the end of iterations. You should really demo the tests to the Devs before or as they start coding so they know what to aim for (see the sketch after this list).

  5. I say 4 is "almost essential" because the same can sometimes be successfully achieved with manual checklists (dare I say scripts!) of expected behaviour - the key is to share this with Devs ahead of time; keep talking to them!
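As one way of picturing point 4: below is a minimal pytest sketch of acceptance checks that are committed early and expected to fail until the Devs deliver. The story number, the authenticate function, and its module are all hypothetical.

    # test_login_story.py -- acceptance checks tasked out in planning and
    # committed on day 1 or 2; they fail until the Devs deliver the feature.
    import pytest
    from myapp.auth import authenticate  # hypothetical code under development

    @pytest.mark.xfail(reason="US-123 not implemented yet", strict=True)
    def test_valid_user_can_log_in():
        assert authenticate("alice", "s3cret") is True

    @pytest.mark.xfail(reason="US-123 not implemented yet", strict=True)
    def test_wrong_password_is_rejected():
        assert authenticate("alice", "wrong") is False

With strict=True, the suite goes red the moment the feature starts to pass, which is the cue to remove the marker and move the task to "completed" on the board.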

With regards to point 2 above, on the subject of tasks: I have tried creating tasks as granular as 1/2 hour to 2 hours in size, each corresponding to a demonstrable piece of work, e.g. "Add checks for incorrect password to auto test - 2 hrs". While this helps me organise my work, it has been criticised by other team members as too detailed, and at stand-ups it means I either move multiple tasks across to complete from the day before or move none at all because I have not got onto them yet. People really want to see a sense of steady progress at daily stand-ups, so it is more helpful to create tasks in 1/2-day or 1-day blocks (though you might keep your own private list of "micro-tasks" that feed into the bigger tasks you use for COMMUNICATING overall progress at the stand-up).

With regards to points 4 and 5 above: the automated tests or manual checklists you prepare early should really cover just the happy paths or key acceptance criteria. Once these pass, you can plan an additional task for a final round of "exploratory testing" towards the end of the iteration to check the edge cases. What the Devs do during that time is problematic, because as far as they are concerned they are "code complete" unless and until you find a bug. Some Agile practitioners advocate going for the edge cases first, although this can also be problematic: if you run out of time, you may not have assured that the acceptance criteria have been delivered. This is one of those finely balanced decisions that depends on the context of the user story and your experience as a QA!

As I said at the beginning, I still don't have all the answers, but I hope the above provides some pointers born of hard experience!

Author: Steve

Updated on February 19, 2020

Comments

  • Steve, about 4 years ago

    Apparently we use the Scrum development methodology. Here's generally how it goes:

    Developers thrash around trying to accomplish their tasks. Generally the tasks take most of the sprint to complete. QA pesters Dev to release something they can test, Dev finally throws some buggy code out to QA a day or two before the sprint ends and spends the rest of the time fixing bugs that QA is finding. QA can never complete the tasks on time, sprints are rarely releasable on time, and Dev and QA have a miserable few days at the end of the sprint.

    How is Scrum supposed to work when releasable Dev tasks take up most of the sprint?

    Thank you everyone for your part in the discussion. As it's a pretty open-ended question, it doesn't seem like there is one "answer" - there are many good suggestions below. I'll attempt to summarize some of my "take home" points and make some clarifications.

    (BTW - Is this the best place to put this or should I have put it in an 'answer'?)

    Points to ponder / act on:

    • Need to ensure that developer tasks are as small (granular) as possible.
    • Sprint length should be based on average task length (e.g. a sprint with 1-week tasks should be at least 4 weeks long).
    • The team (including QA) needs to work on becoming more accurate at estimating.
    • Consider doing a separate QA sprint in parallel but offset, if that works best for the team.
    • Unit testing!
  • Steve, over 15 years ago
    Let's say the most granular testable task typically takes 1 week to complete. What is the best length for the sprint? Can you give some more info on what it would look like for QA to create test cases for Dev?
  • Steve, over 15 years ago
    Your description sounds about right. Much of the time the deadline is artificial - gotta look good for the sprint review! In the planning meeting, the number of tasks is based on time estimates that don't come from Dev, and Dev can't quote well due to lack of info on the tasks. How is that supposed to work?
  • flicken, over 15 years ago
    You can split any task into smaller pieces. For example, try 1) write the interface, 2) implement one method of the interface, 3) implement the rest of the interface.
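    As a sketch of that split (the Repository interface and its methods are made up for illustration):

        from abc import ABC, abstractmethod

        # Task 1: write the interface only -- small, estimable, and reviewable.
        class Repository(ABC):
            @abstractmethod
            def get(self, key: str) -> str: ...

            @abstractmethod
            def put(self, key: str, value: str) -> None: ...

        # Task 2: implement one method; stub the rest for a later task.
        class InMemoryRepository(Repository):
            def __init__(self) -> None:
                self._data: dict[str, str] = {}

            def get(self, key: str) -> str:
                return self._data[key]

            def put(self, key: str, value: str) -> None:
                raise NotImplementedError("Task 3: implement the rest")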
  • flicken, over 15 years ago
    1 week tasks? I'd suggest 1 month sprints.
  • flicken, over 15 years ago
    QA can create tests based on interfaces, working with Dev to ensure that tests reflect any requirements. Test-driven development (TDD).
  • Alan Hensel, over 15 years ago
    I agree. The red flag for me is QA "pestering" Dev. QA should be quietly inserting new defects (from any previous iteration) into the current iteration, which the developers must at least investigate before picking up a new task. This works well if developers pick up multiple tasks per iteration.
  • Alan Hensel, over 15 years ago
    Part of the reason you ask dev for estimates (rough, factor-of-2 estimates) is to make them ask questions and achieve clarity. How can you call it "planning" if rough, factor-of-2 level clarity has not been achieved?
  • Brad Bruce, over 15 years ago
    We schedule time at the end of each sprint to fix issues discovered in the previous one. Sometimes it gets a little heated though, when developers have built on and compounded errors.
  • quamrana, over 15 years ago
    Mike, can you tell us more about the issues that arise when you work like that? We have considered it where I work, but managers are afraid of trying it.
  • anon_swe, over 15 years ago
    If QA is working one iteration behind the rest of the team, how do they decide what's "done"? It sounds like a recipe for driving problems from iteration N to iteration N+1 (where QA finds them) to iteration N+2 (when they're pushed back to dev)...
  • testerab, over 13 years ago
    Sounds like waterfall to me. Interesting to see it the week after this: xprogramming.com/?p=2053
  • Jess Telford, over 12 years ago
    +1 for using practical experience as your examples.
  • ALEXintlsos, over 9 years ago
    We have an offshore QA team with low to moderate skill levels, and since we have low expectations that they will be able to correctly identify defects, it's up to us developers to make sure testing is done as completely as possible. This has worked very well for me personally, and the onshore QA team manager has exhibited some of ismatt's "flabbergasted" behavior. I totally agree that we should be putting separate QA teams out of business, and I also agree that's impossible, because developers are human too.
  • user3001801, over 9 years ago
    The question I have about this approach is where the testers log bugs. Are they allowed to insert them into the current sprint? The recently completed sprint? The product backlog? All of these seem problematic.
  • learnerplates, over 5 years ago
    Your Definition of Done must be complete, detailed, and accurate. Include the test step in the Definition of Done. Involve the dedicated tester from the beginning of development, i.e. the test rep should help flesh out the bug or feature and help identify the end test steps. The test is included in the estimate. This way there are no surprises for the developer or the test rep. And the term "test rep" is not really accurate - we are all Software Deliverers now :)