Risk Based Testing and the 'Good Enough' Approach
First published 06/12/2009
Are you ever asked as a tester, “is the system good enough to ship?” Given our normal experience, where we are never given enough time to do the testing, the system cannot be as good as it should be. When the time comes to make the release decision, how could you answer that question? James Bach introduced the idea of ‘Good Enough’ in 1997 (Bach, 1997). It is helpful in understanding the risk-based test approach, as it seems to hold water as a framework for release decision-making, at least in projects where risks are being taken. So, what is “Good Enough”, and how does it help with the release decision?
Many consultants advocate ‘best practices’ in books and at conferences. Usually, they preach perfection and ask leading questions like, “would you like to improve your processes?” or “do you want zero defects?” Could anyone possibly say “no” to these questions? Of course not. Many consultants promote their services by preaching perfection and pushing mantras that sound good. It’s almost impossible to reject them.
Good enough is a reaction to this ‘compulsive formalism’, as it has been called. It’s not reasonable to aim for zero defects in software, and your users and customers never expect perfection, so why pretend that you’re aiming at perfection? The zero-defect attitude just doesn’t help. Compromise is inevitable, and you always know it’s coming. The challenge is to make a release decision for an imperfect system based on imperfect information.
The definition of “Good Enough” in the context of a system to be released is:
- X has sufficient benefits.
- X has no critical problems.
- The benefits of X sufficiently outweigh the problems.
- In the present situation, and all things considered, improving X would cause more harm than good.
- All the above must apply.
To expand on this rather terse definition: ‘X (whatever X is) has sufficient benefits’ means that enough of the system is deemed to be working for us to take it into production, use it, and get value from it. ‘X has no critical problems’ means there are no severe faults that make it unusable or unacceptable. And, at this moment in time, all things considered, the time spent trying to perfect X would probably cost us more than shipping early with the known problems. This framework allows us to release an imperfect system early because the benefits may be worth it. How does testing fit into this good enough idea?
Firstly, have sufficient benefits been delivered? The tests that we execute must at least demonstrate that the features providing the benefits are delivered completely, so that we have evidence of this. Secondly, are there any critical problems? Our incident reports give us evidence of the critical problems, and of many lesser ones too. There should be no outstanding critical problems for the system to be good enough. Thirdly, is our testing good enough to support this decision? Have we provided sufficient evidence to say these risks are addressed and those benefits are available for release?
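To make the conjunction of criteria concrete, here is a minimal sketch in Python of how the evidence behind a ‘good enough’ assessment might be recorded and combined. The field names and the combining rule are illustrative assumptions of mine, not part of Bach’s definition, and the judgement values would still come from the stakeholders, not the tester.

```python
from dataclasses import dataclass, field

@dataclass
class GoodEnoughAssessment:
    """Illustrative record of the evidence behind a 'good enough' decision.

    Field names are assumptions for this sketch, not part of Bach's definition.
    """
    benefits_demonstrated: bool                   # tests show the benefit-carrying features work
    critical_problems: list = field(default_factory=list)  # open severe incidents, if any
    benefits_outweigh_problems: bool = False      # a stakeholder judgement, not a test result
    further_work_costs_more: bool = False         # delaying release costs more than the known problems

    def is_good_enough(self) -> bool:
        # All four criteria must hold; a single failure means 'not good enough yet'.
        return (self.benefits_demonstrated
                and not self.critical_problems
                and self.benefits_outweigh_problems
                and self.further_work_costs_more)

# Hypothetical example: evidence gathered, no open critical incidents,
# and the stakeholders judge the trade-off worthwhile.
assessment = GoodEnoughAssessment(
    benefits_demonstrated=True,
    critical_problems=[],
    benefits_outweigh_problems=True,
    further_work_costs_more=True,
)
print(assessment.is_good_enough())  # True
```

The point of the sketch is simply that the decision is a conjunction: strong evidence on three criteria cannot compensate for a failure on the fourth.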
It is not for a tester to decide whether the system is good enough. An analogy that might help here is to view the tester as an expert witness in a court of law. The main players in this courtroom scene are:
- The accused (the system under test).
- The judge (the project manager).
- The jury (the stakeholders).
- The expert witness (the tester).
In our simple analogy, we will disregard the lawyers’ role. (In principle, they act only to extract evidence from witnesses.) Expert witnesses are brought into a court of law to find evidence and present that evidence in a form for laymen (the jury) to understand. When asked to present evidence, the expert is objective and detached. If asked whether the evidence points to guilt or innocence, the expert explains what inferences could be made based on the evidence, but refuses to judge innocence or guilt. In the same way, the software tester might simply state that, based on the evidence, “these features work, these features do not work, these risks have been addressed, these risks remain”. It is for others to judge whether this makes a system acceptable.
The tester simply provides information for the stakeholders to make a decision. Adopting this position in a project seems a reasonable one to take. After all, testers do not create software or software faults, and testers do not take the risks of accepting a system into production. Testers should advocate this independent point of view to their management and peers. When asked to judge whether a system is good enough, the tester might say that, on the evidence we have obtained, these benefits are available and these risks still exist. The release decision is someone else’s to make.
However, you know that the big question is coming your way, so when you are asked, “is it ready?”, what should you do? You must help the stakeholders make the decision, but not make it for them. Some of the risks, those problems that we identified months ago and that, in your opinion, would make the system unacceptable, might still exist. Based on the stakeholders’ own criteria, the system cannot now be acceptable unless they relax their perception of the risks. The judgement on outstanding risks must be as follows (see the sketch after the list):
- There is enough test evidence now to judge that certain risks have been addressed.
- There is evidence that some features do not work (the feared risk has materialised).
- Some risks remain (tests have not been run, or no tests are planned).
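As a purely illustrative sketch, assuming hypothetical risk names and a deliberately simplistic evidence rule of my own, the tester’s summary of outstanding risks might be reported along these lines:

```python
from enum import Enum

class RiskStatus(Enum):
    ADDRESSED = "addressed"        # enough passing test evidence to judge the risk addressed
    MATERIALISED = "materialised"  # tests failed: the feared problem has been observed
    OPEN = "open"                  # no tests run (or none planned), so the risk remains

def classify_risk(tests_run: int, tests_passed: int) -> RiskStatus:
    """Classify one risk from the test evidence gathered for it (illustrative rule only)."""
    if tests_run == 0:
        return RiskStatus.OPEN
    if tests_passed < tests_run:
        return RiskStatus.MATERIALISED
    return RiskStatus.ADDRESSED

# Hypothetical evidence: (risk name, tests run, tests passed)
evidence = [
    ("data loss on failover", 12, 12),
    ("incorrect interest calculation", 8, 6),
    ("performance under peak load", 0, 0),
]

for name, run, passed in evidence:
    print(f"{name}: {classify_risk(run, passed).value}")
# data loss on failover: addressed
# incorrect interest calculation: materialised
# performance under peak load: open
```

Such a summary presents the evidence in each of the three categories without offering a verdict; deciding whether the remaining open and materialised risks are tolerable is still the stakeholders’ call.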
This might seem like an idealised, independent position for testers to take, and you might think it unrealistic to expect anyone to behave this way. However, we believe this stance is unassailable, since the alternative, effectively, is for the tester to take over the decision-making in a project. You may still be forced to give an opinion on the readiness of a system, but we believe taking this principled position (at least at first) might raise your profile and credibility with management. They might also come to recognise your role in future projects as that of an honest broker.
REFERENCES
Bach, J. (1997), Good Enough Quality: Beyond the Buzzword, IEEE Computer, August 1997, pp. 96-98.
Paul Gerrard, July 2001
Tags: #risk-basedtesting #good-enough