Test Driven Development: holiday for the tester!
In many software development processes, testing comes at the end of a delivery. In scrum environments, testing is often even left to the developers. The term 'multifunctional team' gives the client or employer the impression that the developers can also carry out the tests. With or without a tester, in scrum or in projects that are secretly still traditionally driven, it often goes like this: 'Almost ready for production, oh wait, give it a quick test'. And (between the lines): 'Make sure you don't find any big problems, because we promised the client we would deliver tomorrow'.
A good testing process is essential for quality. And quality takes time. It pays off in the end but often isn't a priority in the short term.
The tail-end of every process.
Not with Test Driven Development. In this philosophy, testing comes at the beginning. 'No test, no code', I recently heard someone say about TDD. It's a starting point that many testers can only dream about. The great thing about this philosophy is that the developers write the tests themselves. It's about unit tests that validate the code at the lowest level.
What does the TDD process look like?
The TDD developing cycle, generally carried out by a developer, goes as follows:
- Run the existing tests (if there are any; on the very first run there won't be) and check that they all pass.
- Add a new test that tests the bit of code you still have to add.
- Run the tests again and validate that the new test fails.
- Then add the new bit of code.
- Validate that the new test passes (and the existing tests too, still).
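The cycle above can be sketched in a few lines of Python. The `slugify` helper and its test are hypothetical examples, not taken from any real project; the point is the order in which things happen:

```python
# Step 2 of the cycle: write the test before the code exists.
# (`slugify` is a hypothetical function used only for illustration.)
def test_slugify_replaces_spaces_with_hyphens():
    assert slugify("Test Driven Development") == "test-driven-development"

# Step 3: running the test now fails (here with a NameError),
# which proves the test actually exercises the missing code.

# Step 4: add the minimal bit of code that makes the test pass.
def slugify(title):
    return title.lower().replace(" ", "-")

# Step 5: run the tests again and see them all pass.
test_slugify_replaces_spaces_with_hyphens()
```

In a real project the test would live in a test suite run by a framework such as unittest or pytest, but the rhythm — red first, then green — stays the same.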
And an abundance of advantages
Testing becomes a 'Do it yourself, beforehand' for the development team instead of 'Oh right, we have to test too' at the end of the development process, by the tester who has had to wait a long time. If this method is applied consistently, every bit of new code comes with a test that passes. Complete test coverage, 100%. A dream come true for the tester. They can go on holiday... for good!
Does TDD on its own indeed guarantee quality?
You might be thinking that the endless holiday is in sight. Complete coverage with unit tests sounds good and very reliable; the bottom layer, the foundation of the test pyramid, has been thoroughly checked. And yet there are some considerations that call this into question and show that test work is more than just TDD alone. No holiday for the tester, unfortunately. A test-driven development project still has some questions for our tester:
1. Which unit tests should be carried out?
In practice, it appears that when TDD is deployed, it isn't followed as thoroughly as described above: not all code is tested beforehand. So choices have to be made about which code gets a test and which doesn't. The best way of determining this is to look at the business value of that part of the system. The product owner, with the team (including the tester) as an important advisor, must determine which components of the software have so much added value that tests are indispensable. That also means determining which components can go without testing. If the tests are subsequently implemented according to that strategy, the tester needs to monitor regularly whether the tests are in fact checking the agreed parts of the system.
2. What do the unit tests actually do?
The tests themselves also have to be checked. It's not enough to have a test for the important bits of code. A test can contain an error: it can happen, for example, that the test always gives an 'OK' outcome, or that it measures something that doesn't verify the effect of the code at all. The tester has to help the team by observing and asking questions about what the tests actually do.
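A test that can never fail is easier to write than you might think. The sketch below (with a hypothetical `apply_discount` function, invented for illustration) contrasts a flawed test with a meaningful one:

```python
def apply_discount(price, percent):
    # Hypothetical code under test: price after a percentage discount.
    return price * (1 - percent / 100)

def test_always_passes():
    # Flawed: the expected value is computed with the same formula as the
    # implementation, so this test passes even if the formula itself is wrong.
    assert apply_discount(100, 20) == 100 * (1 - 20 / 100)

def test_meaningful():
    # Better: compare against an independently worked-out value.
    # 20% off 100 should be 80, calculated by hand, not by the code.
    assert apply_discount(100, 20) == 80

test_always_passes()
test_meaningful()
```

Both tests report 'OK' today, but only the second one would catch a bug in the discount formula; spotting the difference is exactly the kind of question a tester can raise.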
3. What should the system actually do, as a whole?
According to the possibly outdated waterfall approach, the right test should be carried out at each level. When writing code, those are unit tests; after that come chain or integration tests followed by the end-to-end tests. Admittedly, if you develop test-driven, you can assume that the components built are good, but it's still necessary to carry out chain tests. These chain tests check the integration between the components; are the components actually working together to achieve the desired effect of the system as a whole? With the end-to-end tests, which come on top of that, you run through the user-driven scenarios, in the UI of the system, to see whether the system lives up to the expectations described in the user story.
Still no tropical paradise….
All in all, enough for a tester to do. A TDD process isn't in itself a guarantee of quality, unfortunately. However, when applied properly, the TDD method genuinely puts more emphasis on testing. It gives the tester a point of departure from which to participate in the development process. And the tester still has to carry out the chain and end-to-end tests.
So the tropical paradise, although still far away, is now a bit closer.
My thanks to colleague Daniël for his input from the practical side of things and for the constructive discussions we had on this subject.