Testing and Uncertainty
First published 24/09/2009
The interpretation of a test result by a human being is influenced by the tester's state of mind before the test is run.
Example: suppose we run a test and experience a failure:

– If the failure is in the area where we are 'looking for bugs', e.g. a newly written piece of code, we are disappointed but not unnerved, because we expect to find bugs in new code – that is the purpose of the test.

– If the failure is in an area we trust, so much so that we have no specific tests planned for it, the failure is unnerving: it undermines our confidence. Suppose, for example, we experience a database or operating system failure. This undermines our confidence in the platform as a whole, and it challenges our assumption that the platform was reliable (the very assumption that led us to plan no DB or OS tests).
Our predisposition towards some software aligns closely with our perception of risk. If we perceive that the likelihood of failure of a platform or component is low (even though the impact of failure would be catastrophic), we are unlikely to test that platform or component. Our thinking is, “we are so unlikely to expose failure here – why bother?” We might also attribute the notion of (bad) luck to such areas: “if that test exposed a bug, we'd be so unlucky.” By so doing, we've pre-conditioned ourselves to prepare only tests that have a good chance of finding a bug. In fact, the mainstream teaching on test design techniques presses this very point: “techniques have a good chance of finding a bug”.
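To make the bias concrete, here is a minimal sketch – purely illustrative, with component names and likelihood/impact scores invented for the example – contrasting two ways of ordering candidate test areas: by perceived likelihood of failure alone (the “why bother?” ordering) and by likelihood multiplied by impact.

```python
# Illustrative sketch: ranking test areas by perceived likelihood alone
# buries trusted, low-likelihood areas whose failure would be catastrophic.
# All names and scores below are invented for the example.

test_areas = [
    # (area, perceived likelihood of failure 0-1, impact of failure 0-10)
    ("new feature code",   0.60, 4),   # where we are 'looking for bugs'
    ("recently fixed bug", 0.40, 1),   # fairly likely, but low impact
    ("database platform",  0.10, 10),  # trusted, but failure is catastrophic
    ("operating system",   0.05, 10),  # trusted, but failure is catastrophic
]

# Ordering 1: likelihood only -- the pre-conditioned tester's ranking.
by_likelihood = sorted(test_areas, key=lambda a: a[1], reverse=True)

# Ordering 2: likelihood x impact -- a simple risk-based ranking.
by_risk = sorted(test_areas, key=lambda a: a[1] * a[2], reverse=True)

print("Likelihood only:    ", [name for name, _, _ in by_likelihood])
print("Likelihood x impact:", [name for name, _, _ in by_risk])
```

With these invented numbers, the likelihood-only ordering leaves the database and operating system at the bottom of the list, while the risk-based ordering promotes both above the low-impact bug fix. The exact scores don't matter; the point is that ignoring impact systematically pushes the trusted, catastrophic-failure areas out of the test plan.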
Tags: #ALF
Paul Gerrard