In this blog, I want to share my experience of organising a bug hunt. Testers at my organisation formed teams, who then set about testing a piece of software that my own team had developed. It was a great learning experience for everyone involved, and something I’d highly recommend.
What is a Bug Hunt?
During a bug hunt, a Test Owner presents two teams (each of two or more people) with a piece of software to be tested. The Test Owner provides some basic information, and the teams get to work testing the product. At the end of the session, they report their findings back to the Test Owner. The activity can be seen as training in how to organise and communicate testing, but it is also a fun way to learn a new piece of software.
How Do You Organise a Bug Hunt?
Here are a few insights from my first experience as Test Owner. One of the hardest things for me was estimating how complex to make the test object. The most enriching part, on the other hand, was seeing other testers dissect software that I had helped build. Because the teams were distributed, I also picked up tips on how to make working remotely go more smoothly.
Your experience will undoubtedly be a personal one. Nevertheless, the tips below will make the first time much easier.
1. Prepare the test object
Each team needs an environment where they can get straight to work. Use a stable, released version for your test. Provide a database for each team with some basic information filled in (in this case a range of objects, settings and users with logins to support my user stories).
Ensure that each persona has a user account with the correct roles and permissions, and provide the login details for these accounts. Let the teams know where they can find the test version and corresponding database. At competitions I’ve attended where this wasn’t done in advance, teams spent a lot of valuable time just getting logged in. Preparing the database lets teams get straight to work on testing.
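What the seed data looks like will depend entirely on your product. Purely as an illustration, here is a minimal sketch of the kind of throwaway script I mean, using SQLite from Python’s standard library; the personas, roles, table names and file names are placeholders, not taken from any real product.

```python
# seed_bug_hunt.py -- illustrative only; the personas, roles and table
# names below are placeholders, not taken from a real product.
import sqlite3

# One login per persona, with the role that persona needs.
PERSONAS = [
    ("alice.admin",  "Administrator"),
    ("bob.planner",  "Planner"),
    ("carol.viewer", "Read-only user"),
]


def seed(db_path: str) -> None:
    """Create a throwaway database with just enough data to start testing."""
    con = sqlite3.connect(db_path)
    cur = con.cursor()

    cur.execute("CREATE TABLE IF NOT EXISTS users (login TEXT PRIMARY KEY, role TEXT)")
    cur.execute("CREATE TABLE IF NOT EXISTS objects (name TEXT, owner TEXT)")

    # One user per persona.
    cur.executemany("INSERT OR REPLACE INTO users VALUES (?, ?)", PERSONAS)

    # A few objects owned by different personas, so every user story has data.
    cur.executemany(
        "INSERT INTO objects VALUES (?, ?)",
        [("Sample object 1", "alice.admin"), ("Sample object 2", "bob.planner")],
    )

    con.commit()
    con.close()


if __name__ == "__main__":
    # Run once per team so each team gets its own database.
    seed("bughunt_team_a.db")
    seed("bughunt_team_b.db")
```

However you prepare the data, list the logins in the briefing document as well, so teams never have to ask for credentials during the session.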
TIP: Have your test environment ready before your presentation, and use it to demo your user stories. That way you know it’s fit for use by the testers.
2. Introduce the test object
To begin with, I prepared a demo-style presentation. I noticed that it covered a lot of features and specifications in a relatively short time. However, because everything is new to the teams, they are unlikely to remember all of it the first time round.
TIP: Prepare a complementary document that contains a brief outline of the purpose and function of your test object. Use personas to highlight the most common user stories and don’t make specifications too detailed.
3. Provide documentation
I did not provide extensive specifications, as I thought this would limit the testers’ freedom and creativity in approaching the test object. A long list of specifications takes a long time to read and tends to push testing towards simply working through the list.
Instead, I used user stories and bullet points to describe what the user wants to achieve, allowing the teams to choose their own starting point and structure. The classic ‘As a <persona>, I want <goal>, so that <benefit>’ format keeps each story short and easy to test against. Aim for one to one and a half pages at most.
4. Define the Scope
Although I mentioned certain parts of the software, I did not explicitly limit the scope in my initial description; I was curious to see how the teams would react to this ambiguity. You can always choose to intervene and limit the scope if needed (it probably will be). One or two complex features are enough. For example, my documentation stated that objects appeared elsewhere in the software, but the teams did not have time to look into this. In future I would leave such features out, and perhaps even limit the personas to two.
TIP: Don’t make the scope too big, but do allow for slightly more than would fit in the allotted time. This forces teams to prioritise as part of their test planning.
5. Go! The teams create & execute their Test Plan (2 hrs)
Next, the teams need to create a plan of attack. What are they going to test and how? It’s up to the teams to create their own plan.
Explain that you are acting as a kind of product owner who can answer questions. At this point you are not at liberty to discuss existing bugs, describe test cases or tell them what to test. You can, for example, explain how settings work and how the personas work.
It’s very interesting to see how people approach this. Do they make a list? A mind map? How do they prioritise? Which questions do they ask? Do they know everything they want to test beforehand? Do they timebox? Or just start somewhere and see how far they get? Do they try to test everything? Do they ask about risk? Do they note down situations beforehand, or just go freestyle?
When the teams present their test plan, feel free to point out particularly risky areas. This is the time to limit the scope (what to test and what not to test) if needed.
TIP: Don’t steer the teams in their decisions, but do clarify the scope or user stories.
6. Evaluation (0.5 hrs)
During the evaluation, you can ask teams how they went about testing. Each team may present their approach and answer the questions above in front of the group. Alternatively, you can ask the questions yourself. Take note of things they did differently from your own approach. What can you learn from that?
Take note of any issues or bugs found during testing. You may need to present these to your Scrum team and product owner.
At this point, you can also let the testers know whether an issue they found is already on your backlog, and explain why certain decisions were made. Not only do the others learn about your team’s features, but you also learn new things about your own.
Questions for the teams during the evaluation
When asking questions, try not to lead. Ask open questions (ones that can’t be answered with a simple yes or no). Instead of asking ‘Did you think about what happens when x?’, you could ask ‘Which situations have you considered?’. The following open questions (and any others you think of) will help you evaluate their approach:
Before
- Did they prioritise? If so, how?
- How did they decide where to start?
- Did the team think about paths?
- Was risk taken into account? Why or why not?
During
- How did they collaborate? (Sit together, test together, test individually, timebox, etc.)
- Which tools did they use?
- How did they take notes?
- How did they keep track of what had been tested?
- How did they keep track of time?
- How and at which point did they note down issues?
Have fun organising your Bug Hunt!
Remember that this is a learning experience for everyone involved. Sharing approaches, methods and ideas lets us all learn from each other.
To conclude your bug hunt, thank your testers for their efforts, and make sure you follow up on the issues they reported.
Good luck, and enjoy the experience!