Re-Thinking System-Level Testing – Introduction

First published 18/09/2010

This is the first in a series of short essays in which I will set out an approach to test design, preparation and execution that involves testers earlier, increases their influence in projects, improves baseline documents and stability, reduces rework and increases the quality of system and acceptance testing. The approach needs automated support, so I will also propose an architecture for the next generation of test management tools. I hope that doesn’t sound too good to be true and that you’ll bear with me.

Some scene-setting needs to be done...

In this series, I’m focusing on contexts (in system or acceptance testing) where scripted tests are a required deliverable, providing the instructions to execute tests in the form of scripts, procedures or program code. In this opening essay, I’d like to explore why the usual approach to building test scripts (promoted in most textbooks and certification schemes) wastes time, undermines the tests’ effectiveness and limits the influence of testers in projects. These problems are well known.

There are two common approaches to building scripted tests:

  1. Create test scripts (manual or automated) directly from a baseline (requirements or other specification documents). Each script provides all the information required to execute a test in isolation.
  2. Create tabulated test cases (combinations of preconditions, data inputs and expected results) from the baseline, together with an associated procedure used to execute each test case in turn.

By and large, the first approach is very wasteful and inflexible, and the tests themselves might not be viable anyway. The second approach is much better and is used to create so-called ‘data-driven’ manual (and automated) test regimes, as illustrated in the sketch below. (Separating procedure from data in software and tests is generally a good thing!)

But both of these approaches make two critical assumptions: that the baseline provides all the information system-level testers require, and that the baseline is stable. In theory, a long-term, document-intensive project with formal reviews, stages and sign-offs could deliver stable, accurate baselines containing everything system-level testers need. But few such projects deliver what their stakeholders want, because stakeholder needs change over time and bureaucratic projects and processes cannot respond to change fast enough (or at all). So, in practice, neither assumption is safe. The full information required to construct an executable test script is not usually available until the system is actually delivered and testers can see how things really work. And the baseline is rarely stable anyway: stakeholders learn more about the problem to be solved and the solution design evolves over time, so ‘stability’, if ever achieved, arrives very late. The usual response is to bring the testers onto the project team at a very late stage.
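To make the second, data-driven approach concrete, here is a minimal sketch in Python using pytest. The `apply_discount` function and the tabulated cases are hypothetical stand-ins of my own invention, not taken from any real project; the point is simply that the test data sits apart from the single procedure that executes it.

```python
import pytest

def apply_discount(order_total, customer_type):
    """Toy stand-in for the system under test (hypothetical example)."""
    if customer_type == "loyal" and order_total >= 100:
        return order_total * 0.9
    return order_total

# Tabulated test cases: preconditions/inputs and expected results,
# kept separate from the procedure that runs them.
TEST_CASES = [
    # (order_total, customer_type, expected)
    (100, "loyal", 90.0),
    (99,  "loyal", 99),
    (100, "new",   100),
]

@pytest.mark.parametrize("order_total,customer_type,expected", TEST_CASES)
def test_apply_discount(order_total, customer_type, expected):
    # One procedure executes every tabulated case in turn.
    assert apply_discount(order_total, customer_type) == expected
```

Adding or changing a test case means editing the table, not the procedure, which is what makes this style more flexible than fully self-contained scripts.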

What are the consequences?

When testers arrive on a project late, they are under-informed and misinformed. They are isolated in their own test projects. Their sources of knowledge are unreliable: the baseline documents are not trustworthy. Their human sources may be uncooperative: “the team is too busy to talk to you – go away!”

Does this sound familiar to you?

That’s the scene set. In the next essay, I’ll set out a different vision.

Tags: #Essaysontestdesign

Paul Gerrard