My Take on the terms Manual v Automated Testing and How Testers Think
Alan Julien sent me a LinkedIn message asking me to consider his LinkedIn post and the many comments that resulted.
Being the lazy person I am, and seeing there were more than two hundred comments, I copied the post and as many comments as I could get onscreen, gave it all to ChatGPT and asked for a summary. Here is that summary.
Summary of the Debate on Software Testing Terminology: Manual vs. Automated Testing
The discussion, initiated by Alan Julien, critiques the terms manual and automated testing as misrepresentative of the intellectual and strategic nature of software testing. Many industry professionals support the idea that these labels oversimplify the discipline and propose alternative terminology to better reflect the depth of software testing.
Key Themes in the Debate:
1. The Misconception of “Manual Testing”
- Many argue that manual testing implies repetitive, low-skill work when, in reality, it involves critical thinking, analysis, investigation, and risk assessment.
- Testers engage in exploratory testing, problem-solving, and strategic planning, making the term “manual” misleading.
- Several professionals note that testing has never been purely “manual”—tools have always assisted testing efforts.
2. “Automated Testing” is Not Fully Automated
- The term automated testing suggests that testing can run independently of human intervention, which is not accurate.
- Automation requires human creativity, scripting, maintenance, and analysis to be effective.
- Many argue that “automated testing” should more accurately be called automated test execution since testing itself is a cognitive task.
3. Historical Origins of the Terms
- Some trace the distinction to early test automation tool vendors (such as Mercury and Segue) who promoted their products by contrasting automation with “manual” testing.
- The terminology was commercially driven and stuck due to marketing, HR practices, and job descriptions.
4. The Perceived Value and Pay Disparity
- Automation roles are often paid higher than those labeled as “manual testers,” reinforcing the misconception that automation is superior.
- This devalues the role of exploratory, risk-based, and analytical testing, which is essential for identifying critical defects.
5. Alternative Terminology Proposed
- Many contributors suggest renaming these categories to emphasize cognitive and strategic aspects rather than execution methods:
- “Exploratory Testing” vs. “Scripted Testing”
- “Human-led Testing” vs. “Tool-assisted Testing”
- “Feature Testing” vs. “Regression Testing”
- “Cognitive Testing” vs. “Autonomous Testing”
- “Augmented Testing” (human + AI collaboration)
- “Experiential Testing” vs. “Programmatic Testing”
- Some propose removing the labels entirely and just calling it testing, allowing for a focus on methodology rather than execution style.
6. Impact of AI on the Terminology Debate
- AI-driven testing blurs the line further, making distinctions between “manual” and “automated” even less relevant.
- Some advocate for terms like AI-assisted testing, rather than maintaining old binaries.
Conclusion: The Need for Change
- The consensus is that the terms manual and automated testing are outdated, misleading, and limit the profession.
- Industry professionals advocate for a shift in language to better reflect the intellectual and strategic contributions of testers.
- The challenge remains in changing ingrained industry terminology, but many believe it is a battle worth fighting.
This debate highlights a broader effort to ensure software testing is recognized as a skilled, intellectual profession rather than just an execution-based task.
I agree with the summary above, and that the terms Manual v Automated aren’t helpful.
For some years – maybe fifteen – I have advocated we look at testing from two standpoints:
- How we think about testing
- How we choose to manage the logistics of testing
You could say thinking is the strategy and logistics are the tactics, but I favour a purist attitude: if (and only if) we separate the thinking from the practical aspects of how we strategise, prioritise, design, select and hypothesise our tests, and review the outcomes of those tests after the fact, we can deal with the logistics in a more sensible way.
To me, a test is designed by a human. (Maybe the human uses a tool – Word, Excel, code such as Python – to help.) Now this test could be an idea, or a 50-page document describing an immensely complex procedure to implement an end-to-end test in a very complex environment. I don’t care. The thought process (not the thoughts themselves) is the same – the content and scale can be very different, obviously.
Whether we execute a test with tools or ‘manually’ or by some other chosen piece of magic is irrelevant, if we consider the thought process as universal.
How we implement/execute a test – preparing environments, obtaining and configuring test data, running tests, validating results, analysing the outputs or results data, cleaning up environments and so on – these are logistical choices we can make. We might do some of these tasks with or without tools, with tools performing some or all of the logistics.
Some tasks can only be done 'manually' – that is, using our brains, pencil and paper, a whiteboard, or other aids to capture our ideas, even our test cases. Or we might keep all that information in our heads. Other tasks can only be performed with tools. Every environment, application, stakeholder, set of goals and risk profile is different, so we need to make choices about how we actually 'make the tests happen'.
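To make the logistics point concrete, here is a minimal sketch (using pytest, with entirely hypothetical data and names) of a tool taking on some of those logistical tasks – environment preparation, test data, execution, validation and cleanup – while the test idea itself remains a human design decision.

```python
# A hypothetical sketch: pytest handles some of the logistics (setup,
# test data, teardown), but the expectation encoded in the assertion
# is still the product of human test design.
import pytest

@pytest.fixture
def prepared_environment():
    # Logistics: prepare the environment and test data (a dict standing in
    # for a real database or service in this illustration).
    env = {"users": {42: {"name": "Alice", "active": True}}}
    yield env
    # Logistics: clean up after the test has run.
    env.clear()

def test_active_user_is_found(prepared_environment):
    # The test idea – "an active user should be retrievable" – is human-designed;
    # the tool merely executes the steps and checks the outcome.
    user = prepared_environment["users"].get(42)
    assert user is not None and user["active"]
```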
Some tests – API tests, for example – might be executed using a browser (typing URLs into the address bar), or code, or a dedicated tool. All these approaches require technology, but the browser is simply the UI we use to access the APIs. Are these tests manual or automated? It's a spectrum, and our actual approach is our choice. The manual v automated label blurs the situation. But it's logistics that are the issue, not testing as a whole.
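Here is a small sketch of that same point in code, using Python's requests library and a hypothetical endpoint (https://api.example.com/users/42, with made-up fields). The test idea – what to request and what to check – is identical whether we type the URL into a browser or run this script; only the logistics differ.

```python
# A minimal sketch: the same API check executed programmatically.
# The endpoint and expected fields are hypothetical, for illustration only.
import requests

def check_user_endpoint(base_url="https://api.example.com"):
    response = requests.get(f"{base_url}/users/42", timeout=10)

    # The expectations below are the human-designed part of the test;
    # the tool only executes and reports on them.
    assert response.status_code == 200, f"Unexpected status: {response.status_code}"
    body = response.json()
    assert body.get("id") == 42, "Response did not contain the expected user id"
    print("Check passed:", body)

if __name__ == "__main__":
    check_user_endpoint()
```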
So. I believe there is – at some level of abstraction, and with wide variation – a perspicacious thought process for all tests. The choices we make for logistics vary widely depending on our context. You might call this a ‘context-driven approach’. (I wouldn’t, as I believe all testing is context-driven.) You might not ‘like’ some contexts or the approaches often chosen in those contexts. I don’t care – exploratory v scripted testing is partly a subjective, and partly a contractual and/or cultural call (if you include contractual obligations or organisational culture in your context – which, obviously, I believe you should).
I use the model to explain why, for example, test execution is not all of testing (automated execution isn't the be-all and end-all). I use the model to explain the nuanced difference between improvised testing and pre-scripted testing. ('All testing is exploratory' is an assumption of the New Model.) I use the model to highlight the importance of 'sources of knowledge', 'challenging requirements', 'test modelling' and other aspects of testing that are hard to explain in other ways.
Like all models, the New Model is wrong, but I believe, useful.
If you want to know more about the challenges of testing terminology and the New Model thought process – take a look at my videos here:
Intro to the Testing Glossary Project
The New Model for Testing explains thinking and logistics
Paul Gerrard
Please connect and contact me using my LinkedIn profile or my Mastodon account.