Test Engineering Blogs



from Paul Gerrard

First published 01/06/2016

A recent study from the University of Oxford makes for interesting reading:

  • Over the next two decades, 47% of jobs in the US may be under threat.
  • 702 occupations are ranked in order of their probability of computerisation. Telemarketers are deemed most likely to be automated (99%); recreational therapists least likely (0.28%). Computer programmers appear to be 48% likely to be replaced.

If programmers have a roughly 50/50 chance of being replaced by robots, we should think seriously about how the same might happen to testers.

Machine Learning in testing is an intriguing prospect but not imminent. However, the next generation of testing tools will look a lot different from the ones we use today.

For the past thirty years or so we have placed emphasis on test automation and checking. In the New Model for Testing, I call this 'Applying'. We have paid much less attention to the other nine - yes, nine - test activities. As a consequence, we have simple robots to run tests, but nothing much to help us to create good tests for those robots to run. 

In this paper, I am attempting to identify the capabilities of the tools we need in the future.

The tools we use in testing today are limited by the approaches and processes we employ. Traditional testing is document-centric and aims to reuse plans as records of tester activity. That approach and many of our tools are stuck in the past. Bureaucratic test management tools have been one automation pillar (or millstone). The other pillar – test automation tools – derives from an obsession with the mechanical, purely technical execution activity, and is bounded by an assertion that many vendors still promote: that testing is just bashing keys or touchscreens, which tools can do just as well.

The pressure to modernise our approaches – to speed up testing and reduce the cost of, and dependency on, less-skilled labour – means we need some new ideas. I have suggested a refined approach using a Surveying metaphor. This metaphor enables us to think differently about how we use tools to support knowledge acquisition.

The Surveying metaphor requires new collaborative tools that capture information as it is gathered, with little distraction or friction, but that also prompt the user to ask questions and to document their thoughts, concerns, observations and ideas for tests. In this vision, automated tools get a new role – supporting tester thinking, but not replacing it.

Your pair in the exploration and testing of systems might soon be a robot. Like a human partner, it will capture the knowledge you impart. Over time it will learn how to support and challenge you, and help you to navigate your exploration or Surveying activity. Eventually, your partner will suggest ideas that rival your own. But that is still some way off.

To download the full paper, go to the Tools Knowledge Base.

Tags: #testautomation #TestingTools #Robots #Bots


from Paul Gerrard

First published 09/05/2013

Testing is Long Overdue for a Change

Rumours of the death of testing were greatly exaggerated, but even so, the changes we predict will be dramatic. My own company has been heralding the demise of the 'plain old functional tester' (POFT) for years, and we've predicted both good and bad outcomes of the technological and economic change that is going on right now. Some time ago, I posted a blog, Testing is in a Mess, where I suggested that there's complacency, self-delusion and overcapacity in the testing business; there is too little agreement about what testing is, what it's for or how it should be done.

But there are also some significant forces at play in the IT industry, and I think the testing community will be coming under extreme pressure. I summarise this change as 'redistributed testing': users, analysts, developers and testers will redistribute responsibility for testing by, wait for it, collaborating more effectively. Testers probably won't drive this transition, and they may be caught out if they ignore the winds of change.

In this article, I’ll suggest what we need from the leaders in our industry, the market and our organisations. Of course, some responsibility will fall on your shoulders. Whether you are a manager or technical specialist, there will be an opportunity for you to lead the change.

New Architectures, new Approaches

Much of the software development activity in the next five years or so will be driven by the need for system users and service vendors to move to new business models based on new architectures. One reason SaaS is attractive is that the route to market is so simple that tiny boutique software shops can compete on the same playing field as the huge independent software vendors.

SaaS works as an enabler for very rapid deployment of new functionality and deployment onto a range of devices. A bright idea in marketing in the morning can be deployed as new functionality in the afternoon and an increasing number of companies are succeeding with ‘continuous delivery’. This is the promise of SaaS.

Most organisations will have to come to terms with the new architectures and a more streamlined approach to development. The push and pull of these forces will make you rethink how software available through the Internet is created, delivered and managed. The impacts on testing are significant. If you take an optimistic view, testing and the role of testers can perhaps, at last, mature to what they should be.

The Testing Business has Matured, but is Bloated

Over the last twenty years or so there has been a dramatic growth in the number of people who test and call themselves testers and test managers. It's not that more testing happens; rather, the people who do it are now recruited into teams, with managers who plan, resource and control sizable budgets in software projects to perform project test stages. There is no question that people are much more willing to call themselves testers. There are now a huge number of career testers across the globe; many have done nothing but testing in their professional lives. The problem is that there may now be too many of them.

In many ways, in promoting the testing discipline as some of us have done for more than twenty years, we have been too successful. There is now a sizable testing industry. We have certification schemes, but the schemes that were a step forward fifteen years ago haven't advanced. As a consequence, there are many thousands of professional testers, certified only to a foundation level, who have not developed their skills much beyond test script writing, execution and incident logging. Much of what these people do is basically 'checking', as Michael Bolton has called it.

Most checking could be automated and some could be avoided. In the meantime, we have seen (at last) developer testing begin to improve through their adoption of test-driven and behaviour-driven approaches. Of course, most of the testing they do is checking at a unit level. But this is similar to what many POFTs spend much of their time doing manually. Given that most companies are looking to save money, it’s easy to see why many organisations see an opportunity to reduce the number of POFTs if they get their developers to incorporate automated checking into their work through TDD and BDD approaches.
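To make the point concrete, here is a minimal sketch of the kind of unit-level 'check' a developer writes in a TDD style. The function, the business rule and all the names here are invented for illustration; they are not from any real project:

```python
# A minimal, hypothetical example of a developer-level automated 'check':
# a fixed input, a fixed expected output, a pass/fail verdict and nothing more.

def apply_discount(price, percent):
    """Return the price reduced by the given percentage (invented rule)."""
    return round(price * (1 - percent / 100.0), 2)

def test_apply_discount_gives_expected_price():
    # The check encodes one expected behaviour; it cannot explore or question.
    assert apply_discount(100.0, 20) == 80.0

test_apply_discount_gives_expected_price()
```

This is exactly the kind of confirmation that a POFT might otherwise perform manually against a screen, which is why it is such an obvious candidate to move to the developers.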

As the developers have adopted the disciplines and (mostly free) tools of TDD and BDD, the testers have not advanced as far. I would say that test innovation tends to be focused on the testers' struggle to keep pace with new technologies, rather than on insights and inventions that move the testers' discipline forward. Most testing is still manual, and the automated tests created by test teams (usually with expensive, proprietary tools) might be better done by developers anyway.

In the test management space, one can argue that test management is a non-discipline, that is, there is no such thing as test management, there’s just management. If you take the management away from test management – what’s left? Mostly challenges in test logistics – or just logistics – and that’s just another management discipline isn’t it?

What about the fantastic advances in automation? Well, test execution robots are still, well, just robots. The advances in these have tracked the technologies used to build and deliver functionality – but pretty much that’s all. Today’s patterns of test automation are pretty much the same as those used twenty or more years ago. Free test automation frameworks are becoming more commonly used, especially for unit testing. Free BDD tools have emerged in the last few years, and these are still developer focused but expect them to mature in the next few years. Tools to perform end-to-end functional tests are still mostly proprietary, expensive and difficult to succeed with.

The test management tools that are out there are sophisticated, but they perform only the most basic record keeping. Most people still use Excel and survive without test management products, which support only the clerical test activities and logistics and do little to support the intellectual effort of testers.

The test certification schemes have gone global. As Dorothy Graham says on her blog, the Foundation met its main objective of “removing the bottom layer of ignorance” about software testing. Fifteen years and 150,000+ certificate awards later, it does no more than that. For many people, it seems that this 'bottom layer of knowledge' is all they may ever need to get a job in the industry. The industry has been dumbed down.

Agile: a Stepping Stone to Continuous Delivery

There is an ongoing methodological shift from staged, structured projects to iterative and Agile approaches and now towards 'continuous delivery'. Just as companies seem to be coming to terms with Agile, it's all going to change again: they are now being invited to consider continuous 'Specification by Example' approaches. Specification by Example promotes a continual process of specification, exampling, test-first development and continuous integration. Continuous integration and delivery are the heartbeat, the test, the life-support and the early warning system. The demands for better testing in development are being met, and a growing number of developers have known no other way. If this trend continues, we will get better, more stable software sooner, and much of the late functional checking done by system testers may not be required. Will this reduce the need for POFT testers? You bet.

But, continuous delivery is a machine that consumes requirements. For the rapid output of continuous delivery to be acceptable, the quality of requirement going into that machine must be very high. We argue that requirements must be trusted, but not perfect.

Testers are Being Squeezed

Developers are increasingly taking on the automated checking. Some business analysts are seizing their chance, absorbing critical disciplines into analysis and taking over the acceptance process too. Combined, these forces are squeezing testers out of the 'low-value', unskilled, downstream role. To survive, testers will have to up-skill into upstream, business-savvy, workflow-oriented, UX-aware testing specialists with new tools, or specialise in automation or technical testing, or become business domain experts.

So how do Testers take Advantage of Redistribution?

I set out my top 10 predictions for the next five years in my blog On the Redistribution of Testing and I won’t labour those points here. Rather, I’ll explore some leadership issues that arise from the pressures I mentioned above and potential shifts in the software development and more particularly, testing business.

The core of the redistribution idea is that the checking that occupies much of the time of testing teams (who usually get involved late in projects) can be better done by developers. Relieving the testers of this burden gives them time to get involved earlier and to improve the definition of software before it is built. Our proposal is that testers apply their critical skills to the creation of examples that illustrate the behaviour of software in use in the requirements phase. Examples (we use the term business stories) provide feedback to stakeholders and business analysts to validate business rules defined in requirements. The outcome of this is what we call trusted requirements.

In the Business Story Pocketbook, we define a trusted requirement as “… one that, at this moment in time, is believed to accurately represent the users’ need and is sufficiently detailed to be developed and tested.” Trusted requirements are specified collaboratively with stakeholders, business analysts, developers and testers involved.

Developers, on receipt of validated requirements and business stories, can use the stories to drive their TDD approach. Some (if not all) of these automated checks form the bulk of the regression tests that are implemented in a Continuous Integration regime. These checks can then be trusted to signal a broken build. As software evolves, requirements change; stories and automated checks change too. This approach, sometimes called Specification by Example, depends on accurate specifications (enforced by test automation) for the lifetime of the software product. The later (and fewer) system testers then work in a reduced window, focusing on the more subtle types of problem: end-to-end and user experience testing.
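As a sketch of how a business story can drive an automated check, consider the fragment below. The story, the delivery rule and every name in it are hypothetical, chosen only to illustrate the shape of the idea:

```python
# Hypothetical business story, recorded as an executable check.
# Story: "Given a basket totalling more than 50 pounds,
#         when the customer checks out, then delivery is free."

def delivery_charge(basket_total, standard_delivery=4.99, free_threshold=50.0):
    """Invented business rule: free delivery above the threshold."""
    return 0.0 if basket_total > free_threshold else standard_delivery

def test_free_delivery_over_threshold():
    basket_total = 60.0                      # Given
    charge = delivery_charge(basket_total)   # When
    assert charge == 0.0                     # Then

def test_standard_delivery_at_or_below_threshold():
    assert delivery_charge(50.0) == 4.99

test_free_delivery_over_threshold()
test_standard_delivery_at_or_below_threshold()
```

The value is less in the code than in the conversation: writing the second check forces someone to ask whether a basket of exactly £50 qualifies for free delivery – precisely the kind of ambiguity a trusted requirement must resolve.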

The deal is this: testers get involved earlier to create scenarios that validate requirements and that developers can automate. Improving the quality of requirements means the target is more stable and developers produce better code, protected by regression tests. Test teams, relieved of much of the checking and re-testing, are smaller and can concentrate on the more subtle aspects of testing.

With regard to the late testing in continuously delivering environments, testers are still required to perform some form of 'health check' prior to deployment, but the days of teams spending weeks on this are diminishing fast. We need fewer, much smarter testers working up-front and in the short time between deployment and release.

Where are the Opportunities?

The software development and Agile thought leaders are very forcefully arguing for continuous delivery, collaborative specification, better development practices (TDD, BDD), continuous integration, and testing in production using A/B testing, dark releases and analytics and big data. The stampede towards mobile computing continues apace and for organisations that have a web presence, the strategy is becoming clearer.

The pace of technical change is so high that the old way of testing just won’t cut it. Some teams are discovering they can deliver without testers at all. The challenge of testing is perceived (rightly or wrongly) to be one of speed and cost (even though it’s more subtle than that of course). Testers aren’t being asked to address this challenge because it seems more prone to a technical solution and POFTs are not technical.

But the opportunities are there: to get involved earlier in the requirements phase; to support developers in their testing and automation; to refocus testing away from manual checking towards exploratory testing; to report progress and achievement against business goals and risks, rather than test cases and bug reports.

Testers Need a New Mindset; so do Vendors

We need the testing thought-leaders to step up and describe how testing, if it truly is an information provision service, helps stakeholders and business analysts to create trusted requirements and supports developers in creating meaningful, automatable functional tests – and how it can be there at the end, testing in production or production-like environments to ensure there are no subtle flaws in the delivered system.

Some of the clichés of testing need to be swept away. The old thinking is no longer relevant and may be career limiting. To change will take some courage, persistence and leadership.

Developers write code; testers test because developers can’t: this mentality has got to go. Testing can no longer be thought of as distinct from development. The vast majority of checking can be implemented and managed by development. One potential role of a tester is to create functional tests for developers to implement. The developers, being fluent in test automation, implement lower level functional and structural tests using the same test automation. Where developers need coaching in test design, then testers should be prepared to provide it.

Testers don’t own testing: testing is part of everyone’s job from stakeholder, to users, to business analysts, developers and operations staff. The role of a tester could be that of ‘Testmaster’. A testmaster provides assurance that testing is done well through test strategy, coaching, mentoring and where appropriate, audit and review.

Testing doesn’t just apply to existing software, at the end: testing is an information provision service. Test activity and design is driven by a project’s need to measure achievement, to explore the capabilities, strengths and weaknesses so decisions can be made. The discipline of test applies to all artefacts of a project, from business plans, goals, risks, requirements and design. We coined the term ‘Project Intelligence’ some years ago to identify the information testers provide.

Testing is about measuring achievement, rather than quality: testing has much more to say to stakeholders when its output describes achievement against some meaningful goal rather than alignment to a fallible, out-of-date, untrusted document. The Agile community has learnt that demonstrating value is much more powerful than reporting test pass/fails. They haven't figured out how to do it yet, of course, but the pressure to align Agile projects with business goals and risks is very pronounced.

Whither the Test Manager?

You are a test manager or a test lead now. Where will you be in five years? In six months? It seems to me there are five broad choices open to you (other than getting out of testing and IT altogether).
  1. Providing testing and assurance skills to business: moving up the food chain towards your stakeholders, your role could be to provide advice to business leaders wishing to take control of their IT projects. As an independent agent, you understand business concerns and communicate them to projects. You advise and cajole project leadership, review their performance and achievement, and interpret outputs for your stakeholders.
  2. Managing Requirements knowledge: In this role, you take control of the knowledge required to define and build systems. Your critical skills demand clarity and precision in requirements and the examples that illustrate features in use. You help business and developers to decide when requirements can be trusted to the degree that software can reasonably be built and tested. You manage the requirements and glossary and dictionary of usage of business concepts and data items. You provide a business impact analysis service.
  3. Testmaster – Providing an assurance function to teams, projects and stakeholders: A similar role to 1 above, but for more Agile-oriented environments. You are a specialist test and assurance practitioner who keeps Agile projects honest. You work closely with on-site customers and product owners. You help projects to recognise and react to risk, coach and mentor the team, manage their testing activities and maybe do some testing yourself.
  4. Managing the information flow to/from the CI process: in a Specification by Example environment, if requirements are validated with business stories and these stories are used directly to generate automated tests which are run on a CI environment, the information flows between analysts, developers, testers and the CI system is critical. You define and oversee the processes used to manage the information flow between these key groups and the CI system that provides the control mechanism for change, testing and delivery.
  5. Managing outsourced/offshore teams: In this case, you relinquish your onsite test team and manage the transfer of work to an outsourced or offshore supplier. You are expert in the management of information flow to/from your software and testing suppliers. You manage the relationship with the outsourced test team, monitor their performance and assure the outputs and analyses from them.


The recent history and current state of the testing business, the pressures that drive testers out of testing, and the pull of testing into development and analysis will force a dramatic redistribution of test activity in some, perhaps most, organisations.

Henry Kissinger said, “A leader does not deserve the name unless he is willing occasionally to stand alone”. You might have to stand alone for a while to get your view across. Dwight D Eisenhower gave this definition: “Leadership is the art of getting someone else to do something you want done because he wants to do it”.

Getting that someone else to want to do it might yet be your biggest challenge.

Tags: #futureoftesting #Leadership


from Paul Gerrard

First published 23/10/2013

This article appeared in the October 2013 Edition of Professional Tester magazine. To fit the magazine format, the article was edited somewhat. The original/full article can be downloaded here.

Big Data

Right now, Big Data is trending. Data is now being captured at an astonishing speed. Any device that has a power supply has some software driving it, and if the device is connected to a network or the internet, it is probably posting activity logs somewhere. The volumes being captured across organisations are huge – databases of petabytes (millions of gigabytes) of data are springing up in large and not-so-large organisations, and traditional relational technology simply cannot cope. As Mayer-Schönberger and Cukier argue in their book, “Big Data” [1], it's not that the data is huge in absolute terms; it's that, for every business domain, it seems to be much bigger than what we collected before. Big Data can be huge, but the more interesting aspect of Big Data is its lack of structure. The change in the philosophy of Big Data is reflected in three principles.
  1. Traditionally, we have dealt with samples (because the full data set is too large), and as a consequence we have tended to focus on relationships that reflect cause and effect. Looking at the entire data set allows us to see details that we never could before.
  2. Using the full data set releases us from the need to be exact. If we are dealing with data points in the tens or hundreds, we focus on precision. If we deal with thousands or millions of data points, we aren’t so obsessed with minor inaccuracies like losing a few records here and there.
  3. We must not be obsessed with causality. If the data tells us there is a correlation between two things we measure, then so be it. We don’t need to analyse the relationship to make use of it. It might be good enough just to know that the number of cups of coffee bought by product owners in the cafeteria correlates inversely with the number of severity 1 incidents in production. (Obviously, I made that correlation up – but you see what I mean). Maybe we should give the POs tea instead?
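The coffee correlation above is invented, but the computation behind such a claim is straightforward. Here is a minimal sketch using made-up weekly figures (the numbers are as fictitious as the example itself):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Made-up weekly figures: cups of coffee bought by product owners,
# and severity 1 incidents in production in the same week.
coffee = [12, 18, 25, 30, 41]
sev1_incidents = [9, 7, 6, 4, 2]

r = pearson(coffee, sev1_incidents)
# r comes out strongly negative (an inverse correlation), which on its own
# says nothing whatsoever about causation.
```

This is the point of principle 3: the correlation coefficient is cheap to compute at scale, and it may be actionable even when nobody can explain it.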
The interest in Big Data as a means of supporting decision-making is rapidly growing. Larger organisations are creating teams of so-called data scientists to orchestrate the capture of data and analyse it to obtain insights. The phrase ‘from insight to action’ is increasingly used to summarise the need to improve and accelerate business decision-making.

From Insight to Action

Activity logs tend to be captured as plain text files with fields delimited by spaces, tabs or commas, or as JSON or XML formatted data. This data does not arrive validated, structured and integral, as it would in a relational table – it needs filtering, cleaning and enriching as well as storing. New tools designed to deal with such data are becoming available, and a new set of data management and analysis disciplines is emerging. What opportunities are out there for testing? Can the Big Data tools and disciplines be applied to traditional test practices? Will those test practices have to change to make use of Big Data? This article explores how data captured throughout a test and assurance process could be merged and integrated with definition data (requirements and design information) and production monitoring data, and analysed in interesting and useful ways.
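As a sketch of the filtering, cleaning and enriching step just described, the fragment below processes a hypothetical comma-delimited activity log. The field layout, the values and the deliberately malformed rows are all invented for illustration:

```python
import csv
import io
from datetime import datetime

# Hypothetical raw activity log: timestamp, user, action, duration_ms.
# Real logs arrive unvalidated - note the missing and malformed fields.
raw_log = """\
2013-10-01T09:15:02,alice,login,120
2013-10-01T09:15:09,bob,search,
not-a-timestamp,carol,login,85
2013-10-01T09:16:44,alice,checkout,930
"""

def clean_records(text):
    """Filter out malformed rows; enrich the rest with parsed, typed fields."""
    records = []
    for row in csv.reader(io.StringIO(text)):
        if len(row) != 4:
            continue
        ts, user, action, duration = row
        try:
            when = datetime.fromisoformat(ts)
            duration_ms = int(duration)
        except ValueError:
            continue  # drop rows that fail validation
        records.append({"when": when, "user": user,
                        "action": action, "duration_ms": duration_ms})
    return records

records = clean_records(raw_log)
# Two rows are dropped (missing duration, bad timestamp); two survive,
# now carrying proper datetime and integer types rather than raw strings.
```

The same filter-clean-enrich shape applies whether the source is a web server log, a test execution log or a CI build record, which is what makes the approach interesting for test analytics.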

The original/full article can be downloaded here.

Tags: #continuousdelivery #BigData #TestAnalytics


from Paul Gerrard

First published 05/10/2015

It was inevitable that people would compare my formulation of a New Model for Testing with James Bach and Michael Bolton's distinction between 'testing versus checking'. I've rather avoided giving an opinion online, although I have had face-to-face conversations with people who both like and dislike this idea, and have expressed some opinions privately. The topic came up again last week at StarWest, so I thought I'd put the record straight.

I should say that I agree with Cem Kaner's rebuttal of the testing v checking proposal. It is clear to me that although the definition of checking is understandable, the definition of testing is less so, evidenced by the volume of comments on the authors' own blogs. In my discussions with people who support the idea, it was easy to agree on checking as a definition, but testing, as defined, seemed much harder for them to defend in the face of experience.

Part of the argument for highlighting what checking is, is to posit that we cannot rely on checking alone, particularly with tools. Certainly, brainless use of tools to check – a not infrequent occurrence – is to be decried. But then again, so is brainless use of anything. And it is just plain wrong to say that we cannot rely on automated tests: some products cannot be tested in any other way. Whether you like it or not, that's just the way it is.

One reason I've been reticent on the subject is I honestly thought people would see through this distinction quickly, and it would be withdrawn or at least refined into something more useful.

Some, probably many, have adopted the checking definition. But few have adopted the testing definition, such as it is, and debated it with any conviction. It looks like I have to break cover.

It would be easy to compare exploration in the New Model with the B & B view of testing and my testing with their view of checking. There are some parallels but comparing them only serves to highlight the differences in perspectives of the authors. We don't think the same, and that's for sure.

From my perspective, perhaps the most prominent argument against the testing v checking split is the notion that somehow testing (if it is simply their label for what I call exploration) and checking are alternatives. The sidelining of checking as something less valuable, intellectual or effective doesn't match experience. The New Model reflects this in that the tester explores sources of information to create models that inform testing. Whether these tests are in fact checks is important, but the choice of scripting as a means of recording a test for use in execution (by tools or people) is one of logistics – it is, dare I say, context-specific.

The exploration comes before the test. If you do not understand what the system should (or should not) do, you cannot formulate a meaningful test. You can enquire what a system might do, but who is to say whether that behaviour is correct or otherwise, without some input from a source of knowledge other than the system itself. The SUT cannot be its own test oracle. The challenges you apply during exploration of the SUT are not tests – they are trials of your understanding (your model) of the actual behaviour of the SUT.

Now, in most situations, it is extremely hard to trust that a requirement, however stated, is complete, correct, unambiguous – perfect. In this way, one might never comfortably decide the time is right for testing or checking (as I have scoped it). The New Model implies one should persevere to improve the requirements and other sources, and to align them with your mental model, before committing to coding or testing. Of course, one has to make that transition sometime, and that's where judgement comes in. Who can say what that judgement should be, except that it is a personal, consensual or company-cultural decision to proceed.

Exploration is a dynamic activity, whereby you do not usually have a fully formed view of what the system should do, so you have to think, model and try things based on the model as it stands. Your current model enables you to make predictions on behaviour and to try these predictions on the SUT or stakeholders or against any other source of knowledge that is germane to the challenge at hand.

Now, I fully appreciate that our sources of knowledge are fallible. This is part and parcel of the software development process, and it is why there are feedback loops in (my version of) exploration. But I argue that the exploration part of the test process (enquiring, modelling, predicting and challenging) is the same for developers as it is for testers.

The critical step in transitioning from exploration to testing – or, in the case of a developer, to writing code – is the point at which the developer or tester believes they understand the need and trust their model (synonymous with the understanding in their head). Until that moment, they remain in the exploration state: uncertain to some degree, and not yet confident (if that is the right term) that they could decide whether a system behaviour is correct, incorrect or just 'interesting'.

If a developer or tester proceeds to coding or testing before they trust their model, then it's likely the wrong thing will be built or tested or it will be tested badly.

Now just to take it further, a tester would not raise a defect report while they are uncertain of the required behaviour of the SUT. Only when they are confident enough to test would it be reasonable to do so. If you are not in a position to say 'this works correctly according to my understanding of the requirement (however specified)' you are not testing, you are exploring your sources of information or the SUT.

In the discussion above, the New Model appears to align with this rather uncertain process called exploration.

Now, let's return to the subject of testing versus checking. 'Versus' is the wrong word, I am sure. Testing and checking are not opposed, nor are they alternatives. Some tests can be scripted in some way, for use by people or automated tools. To reach the point in one's understanding where one can flip from exploration to testing, you have to have done the groundwork. In many ways, it takes more effort, thinking and modelling to reach the understanding required to construct a script or procedure to execute a check than simply to learn what a system does through exploration, valuable though that process is.

As an aside, it's very hard to delineate where scripted and unscripted testing begin and end anyway. If I say, 'test X like so, and keep your eyes open' is that a script or an exploratory charter?

In no way can checking be regarded as less sophisticated, less useful, less effective or requiring less effort than testing without a script. The comparison is spurious: for example, some systems can be tested in no other way than with scripted tooling. In describing software products, Philip Armour (in his book 'The Laws of Software Process') says that software is not a product; rather, 'the product is the knowledge contained in the software'. Software is not a product, it is a medium.

The only way a human can test this knowledge product is through the layers of hardware (and software) that must be utilised in very specific ways. In almost every respect, testing is dependent on some automated support. So, as Cem says, at some level, 'all tests are automated... all tests are manual'.

Since the vast majority of software has no user interface, it can only ever be checked using tools. (Is testing in the B & B world only appropriate to a user interface then?) On the other hand, some user interfaces cannot be automated in any viable way (often because of the automated technology between human and the knowledge product). That's life, not a philosophical distinction.
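To make that concrete, here is a minimal sketch of the point. The function and its values are invented for illustration: a library routine with no user interface can only be exercised and checked through code that drives it – there is no screen for a human to look at.

```python
import math

def discount(price, quantity):
    """Hypothetical library function: 10% off orders of ten or more items.

    It has no user interface, so the only way to check its behaviour
    is through code like the assertions below.
    """
    total = price * quantity
    return total * 0.9 if quantity >= 10 else total

# Automated checks - the tool (here, plain asserts) is the only
# route to the behaviour of this knowledge product.
assert discount(5.0, 1) == 5.0                    # no discount below the threshold
assert math.isclose(discount(5.0, 10), 45.0)      # 10% off at the threshold
print("all checks passed")
```

The checks are trivial, but the logistics are the point: whether we call this testing or checking, nothing about it can be done without tooling of some kind.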

The case can be made that people following scripts by rote might be blinkered and miss certain aspects of incorrect behaviour. This is certainly the case, especially if people are asked to follow scripts blindly. But in my thirty years of testing, no tester has ever been asked to be so blinkered. In fact, the opposite is often true: testers are routinely briefed to look out for anything anomalous, precisely to address the risk of oversight. Of course, humans make mistakes, and oversight is inevitable. However, it could also be said that working to a script makes the tester more eagle-eyed – the formality of scripted testing, possibly witnessed (akin to pairing, in fact), is a serious business.

On the other hand, people who have been asked to test without scripts might be unfocused, lazy almost, unthinking and sloppy. They are hard to hold to account, and a charter that merely lists features or scenarios to cover in some way, prepared without thinking or modelling, is unsafe.

What is a charter anyway? It's a high-level script. It might not specify test steps, but, more significantly, it usually defines scope. An oversight in a scripted test might let a bug through. An oversight in a charter might let a whole feature go untested.

Enough. The point is this: it is meaningless and perverse to compare undisciplined, unskilled testing, scripted or unscripted, with its skilled, disciplined counterpart. We should be paying attention to the skills of the testers we employ to do the job. A good tester, investing effort in the left-hand side of the New Model, will succeed whether they script or not. For that reason alone, we should treat the scripted/unscripted dichotomy as a matter of logistics and ignore it when assessing testers' skills.

We should be thankful that, depending on your project, some or all testing can be scripted/automated, and leave it at that.

Tags: #testingvchecking #NewModelTesting


from Paul Gerrard

First published 17/06/2016

A further response to the debate here: https://www.linkedin.com/groups/690977/690977-6145951933501882370?trk=hp-feed-group-discussion. I couldn't fit it in a comment, so I have put it here instead.

Hi Alan. Thanks - I'm not sure we are disagreeing, I think we're debating from different perspectives, that's all.

Your suggestion that other members of our software teams might need to re-skill or up-skill isn't so far-fetched. This kind of re-assignment and re-skilling happens all the time. If a company needs you to reskill because they've in/out/right-sourced, or merged with another company, acquired a company or been acquired - then so be it. You can argue from principle or preference, but your employer is likely to say comply or get another job. That's employment for you. Sometimes it sucks. (That's one reason I haven't had a permanent job since 1984).

My different perspective? Well, I'm absolutely not arguing from the high altar of thought-leadership, demagoguery or dictatorship. Others can do that, and you know where they can stick their edicts.

Almost all the folk I meet in the testing services business say that Digital is dominating the market right now. Most organisations are still learning how to do this and seek assistance wherever they can get it. Services businesses may be winging it, but eventually the dust will settle and they and their clients will know what they are doing. The job market will re-align to satisfy this demand for skills. It was the same story with client/server, the internet, Agile, mobile and every new approach – whatever. It's always the same with hype: some of it becomes our reality.

(I'm actually on a train writing this - I'm on my way to meet a 'Head of Digital' who has a testing problem, or perhaps 'a problem with our testers'. If I can, I'll share my findings...)

Not every company adopts the latest craze, but many will. Agile (with a small a), continuous delivery, DevOps, containerisation, micro-services, shift-left, shift-right or whatever are flavour of the month (although agile, IMHO, has peaked). Part of this transition or transformation is a need for more technical testers – that's all. The pressure to learn code is not coming from self-styled experts; it is coming from a job market which is changing rapidly. It is mandatory only if you want to work in these or related environments.

The main argument for understanding code is not to write code. Code comprehension, as I would call it, is helpful if your job is to collaborate more closely with developers using their language, that's all.

Tags: #Testers #learntocode