The New Model and Testing v Checking

First published 05/10/2015

It was inevitable that people would compare my formulation of a New Model for Testing with James Bach and Michael Bolton's distinction between 'testing' and 'checking'. I have largely avoided giving an opinion online, although I have had face-to-face conversations with people who like the idea and with people who dislike it, and I have expressed some opinions privately. The topic came up again last week at StarWest, so I thought I'd set the record straight.

I should say that I agree with Cem Kaner's rebuttal of the testing v checking proposal. It is clear to me that, although the definition of checking is understandable, the definition of testing is less so, as evidenced by the volume of comments on the authors' own blogs. In my discussions with people who support the idea, it was easy to agree on the definition of checking, but testing, as defined, seemed much harder for them to defend in the face of experience.

Part of the argument for highlighting what checking is, is to posit that we cannot rely on checking alone, particularly with tools. Certainly, the brainless use of tools to check – a not infrequent occurrence – is to be decried. But then again, brainless use of anything… But it is just plain wrong to say that we cannot rely on automated tests. Some products cannot be tested in any other way. Whether you like it or not, that's just the way it is.

One reason I've been reticent on the subject is that I honestly thought people would see through this distinction quickly, and that it would be withdrawn or at least refined into something more useful.

Some, probably many, have adopted the checking definition. But few have adopted the testing definition, such as it is, and debated it with any conviction. It looks like I have to break cover.

It would be easy to compare exploration in the New Model with the B & B view of testing, and my testing with their view of checking. There are some parallels, but comparing them only serves to highlight the differences in the authors' perspectives. We don't think the same, that's for sure.

From my perspective, perhaps the strongest objection to the testing v checking split is the notion that testing (if that is simply their label for what I call exploration) and checking are somehow alternatives. The sidelining of checking as something less valuable, less intellectual or less effective does not match experience. The New Model reflects this in that the tester explores sources of information to create models that inform testing. Whether those tests are in fact checks matters, but the choice of scripting as a means of recording a test for execution (by tools or people) is one of logistics – it is, dare I say, context-specific.

The exploration comes before the test. If you do not understand what the system should (or should not) do, you cannot formulate a meaningful test. You can enquire what a system might do, but who is to say whether that behaviour is correct or otherwise without some input from a source of knowledge other than the system itself? The system under test (SUT) cannot be its own test oracle. The challenges you apply during exploration of the SUT are not tests – they are trials of your understanding (your model) of the actual behaviour of the SUT.

Now, in most situations, it is extremely hard to trust that a requirement, however stated, is complete, correct, unambiguous – perfect. In this way, one might never comfortably decide the time is right for testing or checking (as I have scoped it). The New Model implies one should persevere to improve the requirements/sources and align them with your mental model before you commit to (coding or) testing. Of course, one has to make that transition sometime, and that's where judgement comes in. Who can say what that judgement should be, except that it is a personal, consensual or company-cultural decision to proceed?

Exploration is a dynamic activity: you do not usually have a fully formed view of what the system should do, so you have to think, model and try things based on the model as it stands. Your current model enables you to make predictions about behaviour and to try those predictions against the SUT, with stakeholders, or against any other source of knowledge that is germane to the challenge at hand.

Now, I fully appreciate that our sources of knowledge are fallible. This is part and parcel of the software development process, and it is why there are feedback loops in (my version of) exploration. But I argue that the exploration part of the test process (enquiring, modelling, predicting and challenging) is the same for developers as it is for testers.

The critical step in the transition from exploration to testing (or, in the case of a developer, to writing code based on their understanding, synonymous with the 'model' in their head) comes when the developer or tester believes they understand the need and trusts their model or understanding. Until that moment, they remain in the exploration state: they are uncertain to some degree and not yet confident (if that is the right term) that they could decide whether a system behaviour is correct, incorrect or merely 'interesting'.

If a developer or tester proceeds to coding or testing before they trust their model, then it's likely the wrong thing will be built or tested or it will be tested badly.

Now, just to take it further, a tester would not raise a defect report while they are uncertain of the required behaviour of the SUT. Only when they are confident enough to test would it be reasonable to do so. If you are not in a position to say 'this works correctly according to my understanding of the requirement (however specified)', you are not testing; you are exploring your sources of information or the SUT.

In the discussion above, the New Model appears to align with this rather uncertain process called exploration.

Now, let's return to the subject of testing versus checking. 'Versus' is the wrong word, I am sure. Testing and checking are not opposed, and they are not alternatives. Some tests can be scripted in some way, for use by people or by automated tools. In order to reach the point in one's understanding where one can flip from exploration to testing, you have to have done the groundwork. In many ways, it takes more effort, thinking and modelling to reach the understanding required to construct a script or procedure that executes a check than simply to learn what a system does through exploration, valuable though that process is.
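To make the idea of a scripted check concrete, here is a minimal, hypothetical sketch (the function, the values and the 20% tax rate are my own illustration, not part of the New Model or the B & B definitions). The point it illustrates is that the assertion can only be written once the groundwork has been done: the expected value comes from a source of knowledge outside the system under test.

```python
def price_with_tax(net: float, tax_rate: float) -> float:
    """Hypothetical function under test: add tax to a net price."""
    return round(net * (1 + tax_rate), 2)


def check_price_with_standard_rate() -> None:
    # The expected value (12.00) is the oracle. It is derived from an
    # assumed requirement (a standard tax rate of 20%), i.e. from a
    # source of knowledge other than the SUT itself. Encoding it here
    # presumes the tester already explored that requirement and trusts
    # their model of what the correct behaviour should be.
    assert price_with_tax(10.00, 0.20) == 12.00


if __name__ == "__main__":
    check_price_with_standard_rate()
    print("check passed")
```

The mechanics of the check are trivial; the value lies in the modelling and understanding that produced the expected result in the first place.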

As an aside, it's very hard to delineate where scripted and unscripted testing begin and end anyway. If I say, 'test X like so, and keep your eyes open', is that a script or an exploratory charter?

In no way can checking be regarded as less sophisticated, less useful, less effective or less demanding of effort than testing without a script. The comparison is spurious: some systems, for example, can be tested in no other way than with scripted tooling. In describing software products, Philip Armour (in his book 'The Laws of Software Process') says that software is not a product; rather, 'the product is the knowledge contained in the software'. Software is not a product, it is a medium.

The only way a human can test this knowledge product is through the layers of hardware (and software) that must be utilised in very specific ways. In almost every respect, testing is dependent on some automated support. So, as Cem says, at some level, 'all tests are automated... all tests are manual'.

Since the vast majority of software has no user interface, it can only ever be checked using tools. (Is testing in the B & B world only appropriate to a user interface, then?) On the other hand, some user interfaces cannot be automated in any viable way (often because of the automation technology that sits between the human and the knowledge product). That's life, not a philosophical distinction.

The case can be made that people following scripts by rote might be blinkered and miss certain aspects of incorrect behaviour. This is certainly true, especially if people are asked to follow scripts blindly. But in my thirty years' experience of testing, no tester has ever been asked to be so blinkered. In fact, the opposite is often true: testers are routinely briefed to look out for anything anomalous, precisely to address the risk of oversight. Of course, humans make mistakes, and oversight is inevitable. However, it could also be said that working to a script makes the tester more eagle-eyed – the formality of scripted testing, possibly witnessed (akin to pairing, in fact), is a serious business.

On the other hand, people who have been asked to test without scripts might be unfocused, almost lazy, unthinking and sloppy. They are hard to hold to account, and a charter that lists features or scenarios to cover 'in some way', without any thinking or modelling behind it, is unsafe.

What is a charter anyway? It's a high-level script. It might not specify test steps but, more significantly, it usually defines scope. An oversight in a scripted test might let a bug through. An oversight in a charter might let a whole feature go untested.

Enough. The point is this: it is meaningless and perverse to compare undisciplined, unskilled scripted or unscripted testing with its skilled, disciplined counterpart. We should be paying attention to the skills of the testers we employ to do the job. A good tester, investing the effort on the left-hand side of the New Model, will succeed whether they script or not. For that reason alone, we should treat the scripted/unscripted dichotomy as a matter of logistics and ignore it when considering testers' skills.

We should be thankful that, depending on your project, some or all testing can be scripted/automated, and leave it at that.

Tags: #testingvchecking #NewModelTesting

Paul Gerrard