My Take on the Terms Manual v Automated Testing and How Testers Think

Alan Julien sent me a LinkedIn message asking me to consider his LinkedIn post and the many comments that resulted.

Being the lazy person I am, and seeing there were more than two hundred comments, I copied the post and as many comments as I could get onscreen, gave it all to ChatGPT and asked for a summary. Here is that summary.


Summary of the Debate on Software Testing Terminology: Manual vs. Automated Testing

The discussion, initiated by Alan Julien, critiques the terms manual and automated testing as misrepresentative of the intellectual and strategic nature of software testing. Many industry professionals support the idea that these labels oversimplify the discipline and propose alternative terminology to better reflect the depth of software testing.

Key Themes in the Debate:

1. The Misconception of “Manual Testing”

2. “Automated Testing” is Not Fully Automated

3. Historical Origins of the Terms

4. The Perceived Value and Pay Disparity

5. Alternative Terminology Proposed

6. Impact of AI on the Terminology Debate


Conclusion: The Need for Change

This debate highlights a broader effort to ensure software testing is recognized as a skilled, intellectual profession rather than just an execution-based task.


I agree with the summary above, and that the terms Manual v Automated aren’t helpful.

For some years – maybe fifteen – I have advocated we look at testing from two standpoints:

– How we think about testing

– How we choose to manage the logistics of testing

You could say thinking is the strategy and logistics are the tactics, but I favour a purist attitude: IF (and only if) we separate the thinking – how we strategise, prioritise, design, select and hypothesise our tests, and review the outcomes of tests after the fact – from the practical aspects, we can deal with the logistics in a more sensible way.

To me, a test is designed by a human. (Maybe the human uses a tool – Word, Excel, code (e.g. Python) – to help.) Now this test could be an idea, or a 50-page document describing an immensely complex procedure to implement an end-to-end test in a very complex environment. I don’t care. The thought process (not the thoughts themselves) is the same – the content and scale can be very different, obviously.

Whether we execute a test with tools, ‘manually’, or by some other chosen piece of magic is irrelevant if we consider the thought process to be universal.

How we implement/execute a test – preparing environments, obtaining/configuring test data, running tests, validating results, analysing the outputs or results data, cleaning up environments and so on – these are logistical choices we can make. We might do some of these tasks without tools, or with tools performing some or all of the logistics.
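To make those logistics stages concrete, here is a minimal, self-contained Python sketch – the toy AccountStore system under test and all the names in it are invented purely for illustration. The point is that set-up, test data, execution, validation and clean-up are distinct logistical stages, while the test idea itself (a transfer debits one account and credits another) is the human-designed part.

```python
import unittest

# A toy, in-memory "system under test", invented so the logistics
# stages below have something to act on.
class AccountStore:
    def __init__(self):
        self.balances = {}

    def deposit(self, acct, amount):
        self.balances[acct] = self.balances.get(acct, 0) + amount

    def transfer(self, src, dst, amount):
        if self.balances.get(src, 0) < amount:
            raise ValueError("insufficient funds")
        self.balances[src] -= amount
        self.deposit(dst, amount)

class TransferTest(unittest.TestCase):
    def setUp(self):
        # Logistics: prepare the environment and configure test data.
        self.store = AccountStore()
        self.store.deposit("alice", 200)

    def test_transfer_moves_funds(self):
        # The human-designed test idea: a transfer debits one
        # account and credits the other by the same amount.
        self.store.transfer("alice", "bob", 100)
        # Logistics: validate the results.
        self.assertEqual(self.store.balances["alice"], 100)
        self.assertEqual(self.store.balances["bob"], 100)

    def tearDown(self):
        # Logistics: clean up the environment (trivial here,
        # but the stage exists whether or not a tool performs it).
        self.store = None

if __name__ == "__main__":
    unittest.main()
```

Every stage in that sketch could equally be done by hand, partly by tool, or wholly by tool – the design of the test would not change.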

Some tasks can only be done 'manually' – that is, using our brains, pencil and paper, a whiteboard, or other aids to capture our ideas, even test cases. Or we might keep all that information in our heads. Other tasks can only be performed with tools. Every environment, application, stakeholder, goal and risk profile is different, so we need to make choices on how we actually ‘make the tests happen’.

Some tests – API tests, for example – might be executed using a browser (typing URLs into the address bar), using code, or using a dedicated tool. All these approaches require technology. But the browser is simply the UI we use to access the APIs. Are these tests manual or automated? It's a spectrum, and our actual approach is our choice. The manual v automated label blurs the situation. But it's logistics that are the issue, not testing as a whole.
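As a sketch of the 'code' end of that spectrum: the check a tester might perform by typing a URL into the browser's address bar and eyeballing the response can be written in a few lines of Python. The endpoint URL and the expected fields here are hypothetical, invented for illustration.

```python
import requests  # pip install requests

# Hypothetical endpoint - stands in for whatever URL a tester
# might otherwise type into the browser's address bar.
URL = "https://api.example.com/v1/users/42"

response = requests.get(URL, timeout=10)

# The same human-designed checks, whether made by eye in a
# browser or asserted in code: status, content type, and a
# field we expect to see in the payload.
assert response.status_code == 200
assert response.headers.get("Content-Type", "").startswith("application/json")
assert "id" in response.json()

print("API check passed")
```

Either way, the test – the status is OK, the response is JSON, the record has an id – was designed by a human; only the execution logistics differ.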

So. I believe there is – at some level of abstraction, and with wide variation – a perspicacious thought process behind all tests. The choices we make for logistics vary widely depending on our context. You might call this a ‘context-driven approach’. (I wouldn’t, as I believe all testing is context-driven.) You might not ‘like’ some contexts or the approaches often chosen in those contexts. I don’t care – exploratory v scripted testing is partly a subjective call, and partly a contractual and/or cultural one (if you include contractual obligations or organisational culture in your context – which, obviously, I believe you should).

I use the model – the New Model for Testing – to explain why, for example, test execution is not all of testing (automated execution isn't the be-all and end-all). I use the model to explain the nuanced difference between improvised testing and pre-scripted testing. ('All testing is exploratory' is an assumption of the New Model.) I use the model to highlight the importance of 'sources of knowledge', 'challenging requirements', 'test modelling' and other aspects of testing that are hard to explain in other ways.

Like all models, the New Model is wrong but, I believe, useful.

If you want to know more about the challenges of testing terminology and the New Model thought process, take a look at my videos here:

Intro to the Testing Glossary Project

The New Model for Testing explains thinking and logistics

Paul Gerrard

Please connect and contact me using my LinkedIn profile.

My Mastodon Account