Should Most of What We Call Testing Really Be Called Checking?
First published 14/12/2011
When the testing versus checking debate started with Michael Bolton’s blog here http://www.developsense.com/blog/2009/08/testing-vs-checking/ I read the posts and decided it wasn’t worth getting into. It seemed to be a debate amongst the followers of the blog and of the school, rather than a more widespread unsettling of the status quo.
I fully recognise the difference between testing and checking (as suggested in the blogs). Renaming what most people call testing today as checking, and redefining testing in the way Michael suggests, upset some folk and cheered others. Most if not all developer testing, and all testing through an API using tools, becomes checking by definition. I guess developers might sniff at that. Pretty much what exploratory testers do becomes the standard for what the new testing is, so they are happy. Most testers tend not to follow blogs, so they are still blissfully unaware of the debate.
Brian Marick suggested in a tweet that the blogs were a ‘power play’ and pointed to an interesting online conversation here http://tech.groups.yahoo.com/group/agile-testing/message/18116. The suggested redefinitions appear to underplay checking and promote the virtue of testing. Michael clarified his position here http://www.developsense.com/blog/2009/11/merely-checking-or-merely-testing/, saying it wasn’t:
“The distinction between testing and checking is a power play, but it’s not a power play between (say) testers and programmers. It’s a power play between the glorification of mechanizable assertions over human intelligence. It’s a power play between sapient and non-sapient actions.”
In the last year or so, I’ve had a few run-ins with people and presenters at conferences when I asked what they meant by the word ‘checking’. They tended to forget the distinction and focus on the glorification bit. They told me testing was good (“that’s what I get paid for”) and checking was bad, useless or for drones. I’m not unduly worried by that, but it’s kind of irritating.
The problem I have is that if the idea (distinguishing test v check) is to gain traction, and I believe it should, then changing the definition of testing is hardly going to help. It will confuse more than clarify. I hold that the scope of testing is much broader than testing software. In our business we test systems (a system could be a web page; it could be a hospital). The word and the activity are in widespread use in almost every business, scientific and engineering discipline you can imagine. People may or may not be checking, but to ask them to change the name and description of what they do seems a bit ambitious. All the textbooks, papers and blogs written by people in our business would have to be reinterpreted and possibly changed. Oh, and how many dictionaries around the world would need a correction? My guess is it won’t happen.
It’s much easier to say that a component of testing is checking. Know exactly what that is and you are a wiser tester. Sapient even.
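To make the distinction concrete, here is a minimal sketch of what a check looks like in code. This is my illustration, not anything from Michael’s posts: the discount_price function under test is hypothetical, and the example assumes Python with pytest-style assertions.

```python
# A "check" in the testing-v-checking sense: a mechanizable assertion
# that compares actual behaviour against a predetermined expectation.
# discount_price is a hypothetical function under test, for illustration only.

def discount_price(price: float, percent: float) -> float:
    """Apply a percentage discount to a price, rounded to two decimal places."""
    return round(price * (1 - percent / 100), 2)

def test_ten_percent_discount():
    # The expected value was decided in advance; executing this requires
    # no human judgement, so a machine can repeat it endlessly.
    assert discount_price(100.00, 10) == 90.00

def test_zero_percent_discount_leaves_price_unchanged():
    assert discount_price(100.00, 0) == 100.00
```

The sapient work happens around the check, not inside it: deciding that these are the right expectations, and noticing what the assertions don’t cover. The check itself is mechanizable, which is exactly why it can be run endlessly.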
The test v check debate is significant in common exploratory contexts, where individuals decide what to do right now, in an exploratory session perhaps. But it isn’t significant in the context of larger projects and teams. The sapience required in an exploratory session is concentrated in the moment-to-moment decision making of the tester. In other projects, the sapience is found elsewhere.
In a large business project, say an SAP implementation, there might be ten to twenty legacy and SAP module system test teams, plus multiple integration test teams, as well as one or several customer test teams, all working at a legacy system, SAP module or integrated system level. SAP projects vary from maybe fifty to several thousand man-years of effort, of which a large percentage (tens of percent) is testing of one form or another. Although there will be some exploration in there, most of the test execution will be scripted, and it’s definitely checking as we know it.
But the checking activity probably accounts for a tiny percentage of the overall effort, and much of it is automated. The sapient effort goes into the logistics of managing quite large teams of people who must test in this context. Ten to twenty legacy systems must be significantly updated, system tested, then integrated with other legacy systems and kept in step with SAP modules that are being configured with perhaps ten thousand parameter changes. All this takes place in between ten and thirty test environments over the course of one to three years. And in all this time, business-as-usual changes to the legacy systems, and to the systems to be migrated and/or retired, must be accommodated.
As the business and the project learn what the system is about, requirements evolve and all the usual instability disturbs things. But change is an inevitable consequence of learning, and large projects need very careful change management to make sure the learning is communicated. It’s an exploratory process on a very large scale. Testing includes data migration and integration with customers, suppliers, banks and counterparties; it covers regulatory requirements, cutover and rollback plans, workarounds, support and maintenance processes, as well as all the common non-functional areas.
Testing in these projects has some parallels with a military campaign: it’s all about logistics. The checking activity compares with ‘pulling the trigger’.
Soldiering isn’t just about pulling triggers and, in the same way, testing isn’t just about checking. Almost all the sapient activity goes into putting the testers into exactly the right place at the right time, fully equipped with meaningful and reliable environments, systems under test, integral data and clear instructions, and with dedicated development, integration, technical, data and domain-expert support teams. Checking may be manual or automated, but it’s a small part of the whole.
Exploration in environments like these can’t be done ‘interactively’. It really could take months and tens or hundreds of thousands of pounds, dollars or euros to construct the environment and data to run a speculative test. Remarkably, exploratory tests are part of all these projects. They just need to be wisely chosen and carefully prepared, just like other planned tests, because you have a limited time window and might not get a second chance. These systems are huge production lines for data, so they need to be checked endlessly, end to end. It’s a factory process, so maybe testing is factory-like. It’s just a different context.
The machines on the line (the modules/screens) are extremely reliable commercial products. They do exactly what you have configured them to do, with a Teutonic reliability. The exploration is really one of requirements, configuration options and the known behaviour of modules used in a unique combination. Test execution is confirmation, but it seems it can be done no other way.
It rarely goes smoothly of course. That’s logistics for you. And testing doing what it always does.
Tags: #testingvchecking #checking #factory
Paul Gerrard