From Wall to Test Plan: bringing programmers and testers together

Posted by Sander Koning on July 26, 2016

In my early programming days at TOPdesk, communication with the Testing department was deceptively easy: throw the feature over the wall, and some time later receive a notification that it was okay (or not). Luckily we learned that the easy way is not always the best way, and we introduced Scrum and other agile techniques.

This shortened the lines of communication, and contact between programmers and testers became much more intense. We started discussing the story details and the biggest risks together. With everything written down, the testers could start writing test specifications while the programmers were still writing the code. Besides writing unit and component tests, the programmers could then do some manual testing themselves based on these specifications, before leaving it to the tester to take a second critical look.

Not long after, we introduced automated GUI tests on top of our unit and component tests. This not only increased our test coverage, but also our confidence that all features were well tested. As the automation frameworks took over the repetitive work of clicking through the GUI, the testers were able to do a smarter job, thinking about what exactly should be tested.
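
To give an idea of what such an automated GUI test looks like, here is a minimal sketch using Selenium WebDriver and JUnit. The post does not say which framework we used, so treat this purely as an illustration; the URL and element IDs are made up.

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    class LoginGuiTest {

        // Sketch of an automated GUI test for a critical path: logging in.
        @Test
        void userCanLogIn() {
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("https://example.test/login");  // hypothetical URL
                driver.findElement(By.id("username")).sendKeys("demo");
                driver.findElement(By.id("password")).sendKeys("secret");
                driver.findElement(By.id("login-button")).click();
                // After a successful login we expect to land on the start page.
                assertTrue(driver.getTitle().contains("Start page"));
            } finally {
                driver.quit();  // always close the browser, even if the test fails
            }
        }
    }

A framework like this clicks through the application just as a tester would, which is exactly the repetitive work we wanted to automate.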

Boy, were we happy to write automated tests. Yet in our enthusiasm, it took a while for us to realise that, as a team, we did not have a clear overview of what we were testing. Every individual member who had been working on a task could tell what tests they had written and what was covered, and when something was tested manually, the person responsible would know. But looking at the user story specifications or the test tasks, there was no way to see the testing status at a glance. This meant that we were sometimes over-testing, spending more time on a feature than needed. For example, someone would manually test a feature that was already covered by an automated test. Or we would forget an aspect entirely, because each party assumed it was a task for the other.

Enter the Test Plan.

Each user story we pick up now contains a task for creating a test plan once the specifications are complete. The tester and one or two programmers sit together to make explicit what we should test and at which level of detail. The plan starts by noting the risk level we assigned to the user story: a number from 1 to 5 that gives a guideline for how thoroughly the user story needs to be tested – 1 for critical paths only, 5 for very extensive tests.

Then we write down which features we want to test and at which level. We identify the critical paths, main paths, alternative paths, and corner cases. This helps us determine at which level of detail to write certain tests: it is probably a waste of time to devote an entirely new end-to-end test scenario to a corner case, but on the other hand, we should make absolutely sure that we get immediate and complete feedback whenever something is wrong with a critical path.
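
As an illustration of what ends up in such a plan, here is a small sketch in code form. The path categories and the 1-to-5 risk scale come straight from our plans; the data structure and the particular mapping are hypothetical, just to show the idea that critical paths get heavier coverage than corner cases.

    import java.util.List;
    import java.util.Map;

    class TestPlanSketch {

        enum Path { CRITICAL, MAIN, ALTERNATIVE, CORNER_CASE }
        enum TestLevel { UNIT, COMPONENT, GUI_END_TO_END, MANUAL }

        public static void main(String[] args) {
            // Risk level assigned to the story: 1 = critical paths only,
            // 5 = very extensive tests.
            int riskLevel = 3;

            // Example mapping of paths to test levels; in a real plan this
            // depends on the story and on the risk level above.
            Map<Path, List<TestLevel>> plan = Map.of(
                Path.CRITICAL, List.of(TestLevel.GUI_END_TO_END, TestLevel.UNIT),
                Path.MAIN, List.of(TestLevel.COMPONENT, TestLevel.UNIT),
                Path.ALTERNATIVE, List.of(TestLevel.COMPONENT),
                Path.CORNER_CASE, List.of(TestLevel.UNIT)  // no new end-to-end scenario
            );

            System.out.println("Risk level: " + riskLevel);
            plan.forEach((path, levels) -> System.out.println(path + " -> " + levels));
        }
    }

In our case the plan is a written document rather than code, but the content is the same: every path category gets an explicit decision about where and how it will be tested.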

Writing the plan usually takes 10 to 20 minutes, depending on the size of the user story. At the end, we have a complete overview of how the story is going to be tested. The test specifications are then created based on this plan, and as before, these in turn lead to the actual tests. But unlike before, with a test plan in place before implementation starts, working together to make sure a user story is well tested has become one of the easiest parts of the process.

 

About the author: Sander Koning
