Paul Gerrard

My experiences in the Test Engineering business; opinions, definitions and occasional polemics. Many have been rewritten to restore their original content.

First published 08/08/2014

The nice folk at Testing Cup in Poland have posted a video of my keynote on YouTube.

I originally gave a EuroSTAR keynote in 2002 titled 'What is the Value of Testing and how can we improve it?'. This talk brings the topic up to date but also introduces the New Model Testing that I'm working on at the moment. 

I've written a paper that describes New Model Testing; it can be downloaded here.



Tags: #valueoftesting #NewModelTesting #TestingCup


First published 05/07/2018

Do you remember the 'Testing is Dead' meme that kicked off in 2011 or so? It was triggered by a presentation given by Alberto Savoia here. It caused quite a stir, some copycat presentations and a lot of counter-arguments. But I always felt most people missed the point being made. You just had to strip out the dramatics and Doors music.

The real message was that for some organisations, the old ways wouldn't work any more, and as time has passed, that prediction has come true. With the advent of Digital, mobile, IoT, analytics, machine learning and artificial intelligence, some organisations are changing the way they develop software, and as a consequence, testing changes too.

As testing shifts left, with testers working more collaboratively with the business and developers, test teams are being disbanded and/or distributed across teams. With no test team to manage, the role of the test manager is affected. Or eliminated.

Test management thrives; test managers come and go.
It is helpful to think of testing as less of a role and more of an activity that people undertake in their projects or organisations. Everyone tests, but some people specialise and make a career of it. In the same way, test management is an activity associated with testing. Whether you are the tester in a team or running all the testing in a 10,000 man-year programme, you have test management activities.

For better or for worse, many companies have decided that the role of test managers is no longer required. Responsibility for testing in a larger project or programme is distributed to smaller, Agile teams. There might be only one tester in the team. The developers in the team take more responsibility for testing and run their own unit tests. There’s no need for a test manager as such – there is no test team. But many of the activities of test management still need to be done. It might be as mundane as keeping good records of tests planned and/or executed. It could be taking the overall project view on test coverage (of developer v tester v user acceptance testing for example).

There might not be a dedicated test manager, but some critical test management activities need to be performed. Perhaps the team jointly fulfil the role of a virtual test manager!

Historically, the testing certification schemes have focused attention on the processes you need to follow – usually in structured or waterfall projects. There's a lot of attention given to formality and documentation as a result (and the test management schemes follow the same pattern). The processes you follow, the test techniques you use, and the content and structure of reporting vary wherever you work. I call these things logistics.

Logistics are important, but vary in every situation.
In my thinking about testing, as far as possible, I try to be context-neutral. (Except my stories, which are grounded in real experience).

As a consultant to projects and companies, I never knew what situation would underpin my next assignment. Every organisation, project, business domain, company culture, and technology stack is different. As a consequence, I avoided having fixed views on how things should be done, but over twenty-five years of strategy consulting, test management and testing, certain patterns and some guiding principles emerged. I have written about these before[1].

To the point.

Simon Knight at Gurock asked me to create a series of articles on Test Management, but with a difference. Essentially, the fourteen articles describe what I call "Logistics-Free Test Management". To some people that's an oxymoron – but only because, in many places, we have become accustomed to treating test management as logistics management. Logistics aren't unique to testing.

Logistics are important, but they don't define test management.
I believe we need to think about testing as a discipline where logistics choices are made in parallel with the testing thinking. Test management follows the same pattern. Logistics are important, but they aren't testing. Test management aims to support the choices, sources of knowledge, test thinking and decision-making separately from the practicalities – the logistics – of documentation, test process, environments and technologies used.

I derived the idea of a New Model for Testing – a way of visualising the thought processes of testers – in 2014 or so. Since then, I have presented to thousands of testers and developers and I get very few objections. Honestly!

However, some people do say, with commitment, “that's not new!”. And indeed it isn't.

If the New Model reflects how you think, then it should be a comfortable fit. It is definitely not new to you!
One of the first talks I gave on the New Model is here. (Jump to 43m 50s to skip the value-of-testing talk and the long introduction.)

[Figure: The New Model for Testing]

Now, I might get a book out of the material (hard-copy and/or ebook formats), but more importantly, I'm looking to create an online and classroom course to share my thinking and guidance on test management.

Rather than offer you specific behaviours and templates to apply, I will try to describe the goals, motivations, thought processes, the sources of knowledge and the principles of application and use stories from my own experience to illustrate them. There will also be suggestions for further study and things to think about as exercises or homework.

You will need to adjust these lessons to your specific situation. It requires that you think for yourself – and that is no bad thing.
Here’s the deal in a nutshell: I’ll give you some interesting questions to ask. You need to get the answers from your own customers, suppliers and colleagues and decide what to do next.

I'll be exploring these ideas in my session at the next Assurance Leadership Forum on 25 July. See the programme here and book a place.

In the meantime, if you want to know more, leave a comment or do get in touch at my usual email address.

[1] The Axioms of Testing in the Tester’s Pocketbook for example, https://testaxioms.com

Tags: #testassurance #testmanager #NewModelTesting #ALF #TestManagement #post-format-gallery


First published 30/04/2013

Huib Schoots asked me to name my top 10 books for testers. Initially, I was reluctant because I'm no fan of beauty parades and if an overall 'Top 10 Books' were derived from voting:

 

  1. The same old same old books would probably top the chart again, and
  2. Any obscure books I nominated would be lost in the overall popularity contest.

 

So what's the point?

 

Nassim Taleb tweeted, just the other day: "You should never allow knowledge to turn into a spectator sport, by condoning rankings, awards, "hotness" & "incentives"" (3rd ethical rule)

— Nassim N. Taleb (@nntaleb) April 28, 2013

Having said that, and with reservations, I sent Huib a list. What books would I recommend to testers? I’m still reluctant to recommend books because it is a personal choice. I can’t say whether anyone else would find these useful or not. But, scanning the bookshelf in my study, I have chosen books that I like and think are *not* likely to be on a tester’s shelf. These are not in any order. My comments are subjective and personal.

  1. Zen and the Art of Motorcycle Maintenance, Robert M Pirsig – Quality is for philosophers, not systems professionals. I try not to use the Q word.
  2. Software Testing Techniques, Boris Beizer - for the stories and thinking, more than the techniques stuff (although that’s not bad).
  3. The Complete Plain Words, Sir Ernest Gowers – Use this or your language/locale equivalent to improve your writing skills.
  4. Test-Driven Development, Kent Beck – is TDD about test or design? Does it matter? Testing is about *much* more than finding bugs.
  5. The Tester’s Pocketbook, Paul Gerrard – I wrote this as an antidote to the ‘everything is heuristic’ idea, which I believe is limiting and unambitious.
  6. Systems Thinking, Systems Practice, Peter Checkland – an antidote to the “General Systems Theory” nonsense.
  7. The Tyranny of Numbers, Why Counting Can’t Make Us Happy, David Boyle – The title says it all. Entertaining discussion of targets, metrics and misuse.
  8. Straight and Crooked thinking, Robert H Thouless – Title says it all again. Thouless wrote another book 'Straight and Crooked Thinking in Wartime' 70-odd years ago - a classic.
  9. Logic: an introduction to elementary logic, Wilfred Hodges – Buy this (or a similar) book on predicate logic: few understand what it is, but it’s a cornerstone of critical thinking.
  10. Lectures on Physics, Richard P Feynman – It doesn’t matter whether you understand the Physics, it’s the thought processes he exposes that matters. RPF doesn't do bullsh*t.

  11. Theory of Games and Economic Behavior, John von Neumann and Oskar Morgenstern – OK, it's no. 11. Don't read this. Be aware of it. We have the mathematics for when we finally collect data that people can *actually* use to make decisions.

There are many non-testing books that didn't make my list. Near misses include at least one scripting/programming language book, all of Edward Tufte's books, Gause & Weinberg's "Exploring Requirements", DeMarco and Lister's "Peopleware", DeMarco's "Controlling Software Projects" and "Structured Analysis", Nancy Leveson's "Safeware", Constantine and Lockwood's "Software for Use" and lots more. Ten books isn't enough. Perhaps one hundred isn't enough either.

Huib asked me to explain my "General Systems Theory" nonsense comment. OK, nonsense may be too strong. Maybe I should say non-science?

The Systems movement has existed for 70-80+ years. Dissatisfaction with General Systems Theory (GST) in the 60s and 70s caused a split between the theorists and those who wanted to evolve Systems Thinking into a problem-solving approach. Maybe you could call them two 'schools'? GST was attacked very publicly by the problem-solving movement (the Thinkers), who went down a different path. GST never delivered a 'grand theory' and appeared to exist only to publish a journal. The Thinker movement proceeded to work in areas covering hard and soft systems and decision-making support.

It's clear that the schism between the two streams caused (or was caused by) antagonism. It is reminiscent of the testing schools 'debate'. In a typical criticism from the 1970s, one expert (Naughton) chided another (Lilienfeld, who had attacked GST) by saying "there was nothing approaching a coherent body of tested knowledge to attack, rather a melange of insights, theorems, tautologies and hunches…". The problem that the Thinkers had with GST is that most of its insights and theorems are unsubstantiated and not used for anything. GST sometimes drifts into the mystical – it's almost like astrology. Checkland says: "The problem with GST is it pays for its generality with lack of content" and "... the fact that GST has failed in its application does not mean that systems thinking has failed".

The Systems Thinking movement, being focused on problem solving, inspired most of the useful (and used) approaches and methods. It's less 'pure', but by being grounded in problem solving it has more value. Google "General Systems Theory" and you get few hits of relevance, and only one or two books written in the last 25 years. Try "Systems Thinking" and you get lots (although there's a certain amount of bandwagon-jumping, perhaps). It seems to me that GST has been left behind. I don't like GST because it is so vague: it is impossible to prove or disprove its theories, so, being a sceptical sort, I find that mostly it says nothing useful that I can trust.

When I say systems Thinking I mean Systems Thinking and Soft Systems Methodology and not General Systems Research and Systems Inquiry

— Paul Gerrard (@paul_gerrard) December 22, 2012

 



Tags: #Top10Books


First published 28/09/2016

Last week in Dublin and Cork, I gave talks on testing the Internet of Things (or everything, depending on who you consult). I want to share a little idea or comment that I made during the Cork session. It seemed to come from nowhere – I certainly hadn't prepped the comment so it might have come from somewhere deep in my subconscious. But if you know the source and it's someone other than me, please let me know and I'll acknowledge them.

You never know with live presentations. I don't rehearse (ever), so I never really know what is going to come out. To me, presentations are really stories, so I know the gist of the story, but it's different every time I tell it. My talks are always kind of "live", so to speak, and with the adrenalin flowing, I tend to go into autopilot. Sometimes random (but good) thoughts come out in the flow. From nowhere.

Anyway. What I said was...

“In projects, we tend to let the pilot of a plane – the developer – take off with a general heading and let them go. The navigator – the tester – waits at the other end, and after a rather long wait says 'you were 65 miles off target, and you crashed into a mountain'.

Why on earth do we leave the navigator off the plane? Why do we separate testers from developers? For heaven's sake, why don't we put the navigator on the plane with the pilot in the first place?”

This is the best illustration I can think of for "test early, test often" (a mantra some of us have preached for 20+ years). I've been calling testers "navigators" for ages, but the new idea was that putting testing 'after' development is like asking the navigator to wait at the destination.

So, if you are trying to argue for “test early, test often”, “shift-left” or “testers embedded with developers” you might use my “waiting for a plane” analogy.

Of course, you might have heard this idea somewhere else (and maybe I did too, but forgot). Do let me know – I'd like to thank them.

Tags: #TesterasNavigator #Shift-Left


First published 23/10/2013

This article appeared in the October 2013 Edition of Professional Tester magazine. To fit the magazine format, the article was edited somewhat. The original/full article can be downloaded here.

Big Data

Right now, Big Data is trending. Data is being captured at an astonishing speed. Any device that has a power supply has some software driving it, and if the device is connected to a network or the internet, it is probably posting activity logs somewhere. The volumes being captured across organisations are huge – databases of petabytes (millions of gigabytes) of data are springing up in large and not-so-large organisations – and traditional relational technology simply cannot cope. As Mayer-Schönberger and Cukier argue in their book "Big Data" [1], it's not just that the data is huge; it's that, for every business domain, there seems to be much more of it than we ever collected before. Big Data can be huge, but the more interesting aspect of Big Data is its lack of structure. The change in the philosophy of Big Data is reflected in three principles.
  1. Traditionally, we have dealt with samples (because the full data set is large), and as a consequence we have tended to focus on relationships that reflected cause and effect. Looking at the entire data set allows us to see details that we never could before.
  2. Using the full data set releases us from the need to be exact. If we are dealing with data points in the tens or hundreds, we focus on precision. If we deal with thousands or millions of data points, we aren’t so obsessed with minor inaccuracies like losing a few records here and there.
  3. We must not be obsessed with causality. If the data tells us there is a correlation between two things we measure, then so be it. We don’t need to analyse the relationship to make use of it. It might be good enough just to know that the number of cups of coffee bought by product owners in the cafeteria correlates inversely with the number of severity 1 incidents in production. (Obviously, I made that correlation up – but you see what I mean). Maybe we should give the POs tea instead?
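To make the third principle concrete, here is a minimal sketch in Python (using pandas, with entirely invented weekly figures for the coffee/incident example above) of how such a correlation might be detected without any causal model:

    import pandas as pd

    # Entirely invented weekly figures for the coffee/incident example above.
    data = pd.DataFrame({
        "po_coffees": [41, 38, 35, 30, 27, 22],   # cups bought by product owners
        "sev1_incidents": [2, 3, 4, 5, 7, 8],     # severity 1 incidents in production
    })

    # Pearson's r close to -1 indicates a strong inverse correlation.
    r = data["po_coffees"].corr(data["sev1_incidents"])
    print(f"correlation: {r:.2f}")                # prints a value near -1.0

The point is that the number alone may be actionable (or at least worth watching); no causal explanation is required.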
The interest in Big Data as a means of supporting decision-making is rapidly growing. Larger organisations are creating teams of so-called data scientists to orchestrate the capture of data and analyse it to obtain insights. The phrase ‘from insight to action’ is increasingly used to summarise the need to improve and accelerate business decision-making.

‘From Insight to Action’

Activity logs tend to be captured as plain text files with fields delimited by spaces, tabs or commas, or as JSON or XML formatted data. This data does not arrive validated, structured and integral as it would in a relational table – it needs filtering, cleaning and enriching, as well as storing. New tools designed to deal with such data are becoming available, and a new set of data management and analysis disciplines is emerging with them. What opportunities are out there for testing? Can the Big Data tools and disciplines be applied to traditional test practices? Will these test practices have to change to make use of Big Data? This article explores how data captured throughout a test and assurance process could be merged and integrated with definition data (requirements and design information) and production monitoring data, and analysed in interesting and useful ways.
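To illustrate the filtering, cleaning and enriching step, here is a minimal sketch in Python using only the standard library; the comma-delimited log format (timestamp, user, action, duration in milliseconds) is invented for the example:

    import csv
    from datetime import datetime

    def clean(lines):
        for row in csv.reader(lines):
            if len(row) != 4:
                continue                    # filter: drop malformed records
            ts, user, action, duration = row
            try:
                yield {
                    "timestamp": datetime.fromisoformat(ts),  # clean: parse types
                    "user": user.strip().lower(),
                    "action": action.strip(),
                    "duration_ms": int(duration),
                    "slow": int(duration) > 1000,             # enrich: derived field
                }
            except ValueError:
                continue                    # filter: unparseable timestamp/duration

    raw = ["2013-10-01T09:15:00,Alice,login,230",
           "garbage line",
           "2013-10-01T09:16:02,bob,search,1420"]
    for record in clean(raw):
        print(record)

Real pipelines do the same job at vastly greater scale, but the shape of the work – filter, clean, enrich, store – is the same.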

The original/full article can be downloaded here.

Tags: #continuousdelivery #BigData #TestAnalytics


First published 17/06/2016

A further response to the debate here: https://www.linkedin.com/groups/690977/690977-6145951933501882370?trk=hp-feed-group-discussion. I couldn't fit it in a comment, so I've put it here instead.

Hi Alan. Thanks - I'm not sure we are disagreeing, I think we're debating from different perspectives, that's all.

Your suggestion that other members of our software teams might need to re-skill or up-skill isn't so far-fetched. This kind of re-assignment and re-skilling happens all the time. If a company needs you to reskill because they've in/out/right-sourced, or merged with another company, acquired a company or been acquired - then so be it. You can argue from principle or preference, but your employer is likely to say comply or get another job. That's employment for you. Sometimes it sucks. (That's one reason I haven't had a permanent job since 1984).
 
My different perspective? Well, I'm absolutely not arguing from the high altar of thought-leadership, demagoguery or dictatorship. Others can do that and you know where they can stick their edicts.

Almost all the folk I meet in the testing services business say that Digital is dominating the market right now. Most organisations are still learning how to do this and seek assistance from wherever they can get it. Services businesses may be winging it, but eventually the dust will settle and they and their clients will know what they are doing. The job market will re-align to satisfy this demand for skills. It was the same story with client/server, internet, Agile, mobile and every new approach - whatever. It's always the same with hype - some of it becomes our reality.

(I'm actually on a train writing this - I'm on my way to meet a 'Head of Digital' who has a testing problem, or perhaps 'a problem with our testers'. If I can, I'll share my findings...)

Not every company adopts the latest craze, but many will. Agile (with a small a), continuous delivery, DevOps, containerisation, micro-services, shift-left, -right or whatever are flavour of the month (although agile, IMHO, has peaked). A part of this transition or transformation is a need for more technical testers - that's all. The pressure to learn code is not coming from self-styled experts. It is coming from the job market, which is changing rapidly. It is mandatory only if you want to work in these or related environments.

The main argument for understanding code is not to write code. Code-comprehension, as I would call it, is helpful if your job is to collaborate more closely with developers using their language, that's all.



Tags: #Testers #learntocode


First published 22/12/2015

This article originally appeared on the Ministry of Testing website on 16 Feb 2014 here: http://www.ministryoftesting.com/2014/02/testers-coding-debate-can-move-now/ I have reproduced it unchanged on my own blog as part of a consolidation of all my articles into this place.

Should Testers Learn How to Write Code?

The debate on whether testers should learn how to write code has ebbed and flowed. There have been many blogs on the subject both recent and not so recent. I have selected the ten most prominent examples and listed them below. I could have chosen twenty or more. I encourage you to read references [1, 2, 3, 4, 5, 6, 7, 8, 9, 10].

At the BCS SIGIST in London on the 5th December 2013, a panel discussion was staged on the topic “Should software testers be able to code?” The panellists were: Stuart Reid, Alan Richardson, Dot Graham and myself. Dot recorded the session and has very kindly transcribed the text of the debate. I have edited my contributions to make more sense than I appear to have made ‘live’. (I don’t know if the other contributors will refine their content and a useful record will emerge). Alan Richardson has captured some pre- and post-session thoughts here – “SIGIST 2013 Panel – Should testers be able to code? [11]. I have used some sections of the comments I made at the session in this article.

It’s easy to find thoughtful comments on the subject of testers and coding skills. But why are smart people still writing about the subject? Hasn’t this issue been resolved yet?  There’s a certain amount of hand-wringing and polarisation in the discussion. For example, one argument goes, if you learn how to code, then either:

a)    You are not, by definition, a tester anymore; you are a programmer, or

b)    By learning how to code, you may go native, lose your independence and become a less effective tester.

Another perfectly reasonable view is that you can be a very effective tester without knowing how to code if your perspective is black-box or functional testing only.

In this article, I'd like to explore how the situation is clearly not black-and-white. It's what you do, not what you know, that frames your role; and adding one skill to your skill-set does not reduce the value of another. I'd like to move away from the 'should I, shouldn't I' debate and explore how you might acquire capabilities that are more useful for you personally or for your team – if your team needs those capabilities.

The demand for coding skills is driven by the demand for capabilities in your project. In a separate article I'll be proposing a 'road-map' for tester capabilities that require varying programming skills.

My Contribution to the ‘Debate’

Before we go any further, let me make a few position statements derived from the Q&A of the SIGIST debate. By the way, when the SiGIST audience were asked, it appeared that more than half confirmed that they had programming skills/experience.

Software testers should know about software, but don't usually need to be experts

Business acceptance testers need to know something of the business that the system under test will support. A system tester needs to know something about systems, and systems thinking. Software testers ought to know something about software, shouldn't they? Should a tester know how to write code? If they are looking at code, figuring out ways to test it, then probably. And if they need to write code of their own, or they are in day-to-day contact with developers, helping them to test their code, then technical skills are required. But what a tester needs to know depends on the conversations they need to have with developers.

Code comprehension (reading, understanding code) might be all that is required to take part in a technical discussion. Some programming skills, but not necessarily at a ‘professional programmer level’, are required to create unit tests, services or GUI test automation, test data generation, output scanning, searching and filtering and so on. The level of skill required varies with the task in hand.
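To give a feel for the lower end of that skill range, here is a minimal sketch of the 'output scanning, searching and filtering' kind of task in Python; the log file name and error patterns are hypothetical:

    import re

    ERROR_PATTERN = re.compile(r"\b(ERROR|FATAL|Traceback)\b")

    # Scan a (hypothetical) application log and report lines that look like failures.
    with open("app.log") as log:
        hits = [(n, line.rstrip()) for n, line in enumerate(log, start=1)
                if ERROR_PATTERN.search(line)]

    for line_number, text in hits:
        print(f"{line_number}: {text}")

A dozen lines like these are well within reach of a tester with modest programming skills, and nowhere near 'professional programmer level'.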

New skills only add, they can’t subtract

There is some resistance from some testers to learning a programming language. But having skills can't do you any harm. Having them is better than not having them; new skills only add, they don't subtract.

Should testers be compelled to learn coding skills?

Most of us live in free countries, so if your employer insists and you refuse, then you can find a job elsewhere. But is it reasonable to compel people to learn new skills? It seems to me that if your employer decides to adopt new working practices, you can resist the change on the basis of principle or conscience or whatever, but if your company wishes to embed code-savvy testers in the development teams it really is their call. You can either be part of that change or not. If you have the skills, you become more useful in your team and more flexible too of course.

How easy is it to learn to code? When is the best time to learn?

Having any useful skill earlier is better than later of course, but there’s no reason why a dyed-in-the-wool non-techy can’t learn how to code. I suppose it’s harder to learn anything new the older you are, but if you have an open mind, like problem-solving, precise thinking, are a bit of a pedant and have patience – it’s just a matter of motivation.

However, there are people who simply do not like programming or find it too hard or uncomfortable to think the way a programmer needs to think. Some just don’t have the patience to work this way. It doesn’t suit everyone. The goal is not usually to become a full time programmer, so maybe you have to persist. But ultimately, it’s your call whether you take this path.

How competent at coding should testers be?

My thesis is that all testers could benefit from some programming knowledge, but you don’t need to be as ‘good’ a programmer as a professional developer in order to add value. It depends of course, but if you have to deal with developers and their code, it must be helpful to be able to read and understand their code. Code comprehension is a less ambitious goal than programming. The level of skill varies with the task in hand. There is a range of technical capabilities that testers are being asked for these days, but these do not usually require you to be professional programmer.

Does knowing how to code make you a better tester?

I would like to turn that around and say, is it a bad thing to know how to write code if you’re a tester? I can’t see a downside. Now you could argue: if you learn to write code, then you’re infected with the same disease that the programmers have – they are blind to their own mistakes. But testers are blind to their own mistakes too. This is a human failing, not one that is unique to developers.

Let’s take a different perspective: If you are exploring some feature, then having some level of code knowledge could help you to think more deeply about the possible modes (the risks) of failure in software and there’s value in that. You might make the same assumptions, and be blind to some assumptions that the developer made, but you are also more likely to build better mental models and create more insightful tests.

Are we not losing the tester as a kind of proxy of the user?

If you push a tester to be more like a programmer, won’t they then think like a programmer, making the same assumptions, and stop thinking of or like the end user?

Dot Graham suggested at the SiGIST event, “The reason to separate them (testers) was to get an independent view, to find the things that other people missed. One of the presentations at EuroSTAR (2013) was a guy from an agile team who found that all of the testers had ‘gone native’ and were no longer finding bugs important to users. They had to find a way to get independence back.”

On the other hand, by separating the testers, the team loses much of the rapid feedback, which is probably more important than 'independence'. Independence is important, but you don't need to be in a separate team (with a bureaucratic process) to have an independent mindset – which is what really matters. Wherever the tester is based, the independence lies in their independent mind, whether they test at the end or work with the developer before the code is written.

There is a Homer Simpson quote [12]: “How is education supposed to make me feel smarter? Besides, every time I learn something new, it pushes some old stuff out of my brain. Remember when I took that home winemaking course, and I forgot how to drive?”

I don’t think that if you learn how to code, you lose your perspective as a subject matter expert or experience as a real user, although I suppose there is a risk of that if you are a cartoon character. There is a risk of going native if, for example, you are a tester embedded with developers. By the same token, there is a risk that by being separated from developers you don’t treat them as members of the same team, you think of them as incompetent, as the enemy. A professional attitude and awareness of biases are the best defences here.

Why did we ever separate testers from developers? Suppose that today, your testers were embedded and you had to make a case that the testers should be extracted into a separated team. I’m not sure the case for ‘independence’ is so easily made because siloed teams are being discredited and discouraged in most organisations nowadays.

What is this shift-left thing?

There seems to be a growing number of companies reducing their dependency on scripted testing. The dependency on exploratory testers and on testers 'shifting left' is increasing.

Right now, a lot of companies are pushing forward with shift-left, Behaviour-Driven Development, Acceptance Test-Driven Development or Test-Driven Development. In all cases, someone needs to articulate the examples – the checks – that drive these processes. Who will write them, if not the tester? With ATDD, BDD approaches, communication is supported with stories, and these stories are used to generate automated checks using tools.
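For illustration, here is a minimal sketch of that story-to-check step using Python and the behave tool; the scenario wording and step functions are invented for the example:

    # The story (in a .feature file) that the team writes together:
    #
    #   Scenario: A transfer moves money between accounts
    #     Given an account with a balance of 100
    #     When I transfer 30 to a second account
    #     Then the first account holds 70

    # The step definitions that the tool binds to the story text:
    from behave import given, when, then

    @given("an account with a balance of {amount:d}")
    def account_with_balance(context, amount):
        context.balance = amount

    @when("I transfer {amount:d} to a second account")
    def transfer(context, amount):
        context.balance -= amount       # stand-in for driving the real system

    @then("the first account holds {expected:d}")
    def check_balance(context, expected):
        assert context.balance == expected

Someone has to write both the stories and the bindings – and that someone is very often the tester.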

Companies are looking to embed testers into development teams to give the developers a jump start to do a better job (of development and testing). An emerging pattern is that companies are saying, “The way we’ll go Agile is to adopt TDD or BDD, and get our developers to do better testing. Obviously, the developers need some testing support, so we’ll need to embed some of our system testers in those teams. These testers need to get more technical.”

One goal is to reduce the number of functional system testers. There is a move to do this – not driven by testers – but by development managers and accountants. Testers who can’t do anything but script tests, follow scripts and log incidents – the plain old functional testers – are being offshored, outsourced, or squeezed out completely and the shift-left approach supports that goal.

How many testers are doing BDD, ATDD or TDD?

About a third of the SIGIST audience (of around 80) raised their hands when asked this. That seems to be the pattern at the moment. Some companies practising these approaches have never had dedicated independent testers, so the proportion of companies adopting these practices may be higher.

Shouldn’t developers test their own code?

Glenford Myers’ book [13] makes the statement, “A programmer should avoid attempting to test his or her own program”. We may have depended on that ‘principle’ too strongly, and built an industry on it, it seems. There are far too many testers who do bureaucratic paperwork shuffling – writing stuff down, creating scripts that are inaccurate and out of date, processing incidents that add little value etc. The industry is somewhat bloated and budget-holders see testers as an easy target for savings. Shift-left is a reassessment and realignment of responsibility for testing.

Developers can and must test their own code. But that is not ALL the testing that is done, of course.

Do testers need to re-skill?

Having technical skills means that you can become a more sophisticated tester. We have an opportunity, on the technical side, working more closely – pairing even – with developers. (Although we should also look further upstream for opportunities to work more closely with business analysts).

Testers have much to offer to their teams. We know that siloed teams don’t work very well and Agile has reminded us that collaboration and rapid feedback drive progress in software teams. But who provides this feedback? Mostly the testers. We have the right skills and they are in demand. So although the door might be closing on ‘plain old functional testers’ the window is open and opportunities emerging to do really exciting things. We need to be willing to take a chance.

We’re talking about testers learning to code but what about developers learning to test better? Should organizations look at this?

Alan Richardson: We need to look at reality and listen to people on the ground. Developers can test better, business analysts can test better – the entire process can be improved. We’re discussing testers because this is a testing conference. I don’t know if other conferences are discussing these things, but developers are certainly getting better at testing, although they argue about different ways of doing it. I would encourage you to read some of the modern development books like “Growing Object-Oriented Software Guided by Tests” [14] or Kent Beck [15]. That’s how developers are starting to think about testing, and this has important lessons for us as well.

There is no question that testers need to understand how test-driven approaches (BDD, TDD in particular) are changing the way developers think about testing. The test strategy for a system and testers in general must take account (and advantage) of these approaches.

Summary

In this article, I have suggested that:
  • Tester programming skills are helpful in some situations and having those skills would make a tester more productive
  • It doesn’t make sense to mandate these skills unless your organization is moving to a new way of working, e.g. shift-left
  • Tester programming skills rarely need to be as comprehensive as a professional programmer’s
  • A tester-programming training syllabus should map to required capabilities and include code-design and automated checking methods.
We should move on from the ‘debate’ and start thinking more seriously about appropriate development approaches for testers who need and want more technical capabilities.

References

  1. Elizabeth Hendrickson, Do Testers Have to Write Code?, http://testobsessed.com/2010/10/testers-code/
  2. Cem Kaner, comments on the blog above, http://testobsessed.com/2010/10/testers-code/comment-page-1/#comment-716
  3. Alister Scott, Do software testers need technical skills?, http://watirmelon.com/2013/02/23/do-software-testers-need-technical-skills/
  4. Markus Gärtner, Learn how to program in 21 days or so, http://www.associationforsoftwaretesting.org/2014/01/23/learn-how-to-program-in-21-days-or-so/
  5. Shmuel Gershon, Should/Need Testers know how to Program, http://testing.gershon.info/201003/testers-know-how-to-program/
  6. Alan Page, Tear Down the Wall, http://angryweasel.com/blog/?p=624, and Exploring Testing and Programming, http://angryweasel.com/blog/?p=613
  7. Alessandra Moreira, Should Testers Learn to Code?, http://roadlesstested.com/2013/02/11/the-controversy-of-becoming-a-tester-developer/
  8. Rob Lambert, Testers need to learn to code, http://thesocialtester.co.uk/testers-need-to-learn-to-code/
  9. Rahul Verma, Should the Testers Learn Programming?, http://www.testingperspective.com/?p=46
  10. Michael Bolton, At least three good reasons for testers to learn how to program, http://www.developsense.com/blog/2011/09/at-least-three-good-reasons-for-testers-to-learn-to-program/
  11. Alan Richardson, SIGIST 2013 Panel – Should Testers Be Able to Code, http://blog.eviltester.com/2013/12/sigist-2013-panel-should-testers-be.html
  12. 50 Funniest Homer Simpson Quotes, http://www.2spare.com/item_61333.aspx
  13. Glenford J Myers, The Art of Software Testing
  14. Steve Freeman and Nat Pryce, Growing Object-Oriented Software Guided by Tests, http://www.growing-object-oriented-software.com/
  15. Kent Beck, Test-Driven Development by Example
  16. Arnold Pears et al., A Survey of Literature on the Teaching of Introductory Programming, http://www.seas.upenn.edu/~eas285/Readings/Pears_SurveyTeachingIntroProgramming.pdf


Tags: #Coding #TestersLearningCode #TechnicalTesters #Programming


First published 04/09/2014

In advance of the keynote I'm giving at the BCS SIGIST meeting today, I have uploaded two recent papers that were written for the Tea Time with Testers magazine.

There are two articles in the series so far:

1. The Internet of Everything – What is it and how will it affect you?
 
In this article series I want to explore what the IoT and IoE are and what we need to start thinking about. I’ll approach this from the point of view of a society that embraces the technology. Then I will look more closely at the risks we face and finally how we as the IT community in general and the testing community in particular should respond.
 
 In the first article of the series I will look at what the IoE is and how it affects us all. It's important to get a perspective on what the IoE is, so you get a sense of the scale, the variety, the ubiquity, complexity and challenge of the technological wave that many people believe will dominate our industry for the next ten to twenty years.
 
Let me start the whole series off with what sounds a bit like science fiction, but will soon be science fact. John Smith and his family are an invention.
 
 

2. Internet of Everything – Architecture and Risks

Right now, it is a very confusing state of affairs, but clearly, an awful lot of effort and money is being invested in defining the standards and building business opportunities from the promise of the new Internet world order. Standards are on the way, but today, most applications are what you might call bleeding edge, speculative or exploratory. It looks like standards will reveal themselves when successful products and architectures emerge from the competitive melee.

The future is being predicted with more than a little hype. The industry research companies are struggling to figure out how to make reasonable hype-free predictions. A better-than-average summary can be found here: http://harborresearch.com/wp-content/uploads/2013/12/Harbor-Research_Diversified-Industrial-Newsletter.pdf

In the second article, I discuss the evolving architecture of the IoE and speculate on the emerging risks of it all. 

Other papers are on the way... if only I had more time to get them written... :O)



Tags: #IOE #IOT #InternetofThings #InternetofEverything


First published 22/09/2017

Twitter is a lousy medium for debate, don't you think?

I had a very brief exchange with Michael Bolton below.  (Others have commented on this thread this afternoon). To my apparently contradictory (and possibly stupid) comment, Michael responded with a presumably perplexed “?”

This blog is a partial explanation of what I said, and why I said it. You might call it an exercise in pedantry. (Without pedantry, there is less joy in the world – discuss). There's probably a longer debate to be had, but certainly not on Twitter. Copenhagen perhaps, Michael? My response was to 3) Lesson.

3) Lesson: don't blindly trust ... your automated checks, lest they fail to reveal important problems in production code.

I took the lesson tweet out of context and ignored the first two tweets deliberately, and I'll comment on those below. For the third, I also ignored the “don't blindly trust your test code” aspect too and here's why. If you have test code that operates at all, and you have automated checks that operate, you presumably trust the test code already. You will have already done whatever testing of the test code you deemed appropriate. I was more concerned with the second aspect. Don't blindly trust the checks.

But you know what? My goal with automation is exactly that – to blindly trust automated checks.

If you have an automated check that runs at all, then given the same operating environment, test data, software versions, configuration and so on, you would hardly expect the repeated check to reveal anything new. If it did 'fail', then it really ought to flag some kind of alarm. If you are not paying attention or are ignoring the alarm, then on your own head be it. But if I have to be paying attention all the time, effectively babysitting – then my automation is failing. It is failing to replace my manual labour (often the justification for automating in the first place).

A single check is most likely to be run as part of a larger collection of tests, perhaps thousands, so the notification process needs to be integrated with some form of automated interpretation or at least triggered when some pre-defined threshold is exceeded.
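Here is a minimal sketch of that kind of integration in Python: run the whole collection and raise the alarm only when a pre-defined failure threshold is exceeded. The notify target (phone, email, chatOps) is a stand-in:

    FAILURE_THRESHOLD = 0.01    # alarm only if more than 1% of checks fail

    def run_suite(checks, notify):
        """Run (name, check) pairs; alert a human only past the threshold."""
        failures = [name for name, check in checks if not check()]
        if len(failures) / len(checks) > FAILURE_THRESHOLD:
            # Wake someone up - don't rely on a human watching the run.
            notify(f"{len(failures)} of {len(checks)} checks failed: {failures[:10]}")
        return failures

    # notify=print stands in for a real alerting channel.
    checks = [("adds", lambda: 1 + 1 == 2), ("concats", lambda: "a" + "b" == "ab")]
    run_suite(checks, notify=print)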

Why blindly? Well, we humans are cursed by our own shortcomings. We have low attention-spans, are blind to things we see but aren't paying attention to and of course, we are limited in what we can observe and assimilate anyway. We use tools to replace humans not least because of our poor ability to pay attention.

So I want my automation to act as if I'm not there and to raise alarms in ways that do not require me to be watching at the time. I want my phone to buzz, or my email client to bong, or my chatOps terminal to beep at me. Better still, I want the automation to choose who to notify. I want to be the CC: or BCC: in the message, not necessarily the To: all the time.

I deliberately took an interpretation of Michael's comment that he probably didn't intend. (That's Twitter for you).

When my automated checks run, I don't expect to have to evaluate whether the test code is doing 'the right thing' every time. But over time, things do change – the environment, configuration and software under test – so I need to pay attention to whether these changes impact my test code. Potentially, the check needs adjustment, re-ordering, enhancing, replacing or removing altogether. The time to do this is before the test code is run – during requirements discussions or in collaboration with developers.

I believe this was what Michael intended to highlight: your test code needs to evolve with the system under test and you must pay attention to that.

Now, my response to the tweet suggests that, rather than babysitting your automated checks, you should spend your time more wisely – testing the system in ways your test code cannot (economically).

To the other tweets:

1) What would you discover if you submitted your check code to the same review, scrutiny, and testing as your production code?

2) If you're not scrutinizing test code, why do you trust it any more than your production code? Especially when no problems are reported?

Test code can be trivial, but can sometimes be more complex than the system under test. It's the old, old story: “who tests the test code, and how?”. I have worked on a few projects where test code was treated like any other code – high-integrity projects and the like – but even then I didn't see much 'test the test code' activity. I'd say there are some common factors that make it less likely you would test your test code, and feel safe (enough) not doing so.
  1. Test code is built incrementally, usually, so that it is 'tried' in isolation. Your test code might simulate a web or mobile transaction, for example. If you can watch it move to fields, enter data and check the outcomes correctly, most testers would be satisfied that it works as a simple check. What other test is required than re-running it, expecting the same outcome each time?
  2. Where the check is data-driven, of course, the code uses prepared data to fill, click or check parameterised fields, buttons and outcomes respectively. On a GUI app this can be visibly checked. Should you try invalid data (not included in your planned test data) and so on? Why bother? If the test code fails, then that is notification enough that you screwed up – fix it. If the test code flags false negatives when, for example, your environment changes, then you have a choice: tidy up your environment, or add code to accommodate acceptable environmental variations. (A minimal sketch of a data-driven check appears after this list.)
  3. Now, when your test code loses synchronisation or encounters a real mismatch of outcomes, your code needs handlers for these situations. These handlers might be custom-built for every check (an expensive solution) or utilise system-wide procedures to log, recover, re-start or hand-off depending on the nature of the tests or failures. This ought to be where your framework or scaffolding code comes in.
  4. Surely the test code needs testing more than just 'using it'? The thing is, your test code is not handed over to users for them to enter extreme, dubious or poor-quality data. All the data it will ever handle is in the test suite you use to test the system under test. Another tester might add new rows of test data to feed it, but problems arising are as likely to be due to other things than new test data. At any rate, what tests would you apply to your test code? Your test data, selected to exercise extremes in your system under test, is probably quite well suited to testing the test code anyway.
  5. When problems do arise when your test code is run, it is more likely to be caused by environmental/data problems or software changes, so your test code will be adapted in parallel with these changes, or made more resilient to variations (bearing in mind the original purpose of the test code).
  6. Your scaffolding code or home-grown test framework handles this, doesn't it? Pretty much the same arguments apply. It is likely to be made more robust through use, evolution and adaptation than by a lot of planned tests.
  7. Who tests the tests? Who tests the tests of the tests? Who tests the tests of ...
Your scaffolding, framework, handlers, analysis, notifications, messaging capability need a little more attention. In fact, your test tools might need to be certified in some way (like standard C, C++, Java compilers for example) to be acceptable to use at all.
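As promised under point 2 above, here is a minimal sketch of a typical data-driven check, written pytest-style; the login function is a stand-in for code that drives the real application:

    import pytest

    # Prepared test data: the only inputs this check code will ever handle.
    LOGIN_DATA = [
        ("alice", "right-password", True),
        ("alice", "wrong-password", False),
        ("", "any-password", False),
    ]

    def login(user, password):
        # Stand-in for filling fields and clicking buttons in the real system.
        return user == "alice" and password == "right-password"

    @pytest.mark.parametrize("user,password,expected", LOGIN_DATA)
    def test_login(user, password, expected):
        assert login(user, password) == expected

The check is exercised by exactly the rows in LOGIN_DATA – which is why, in practice, the test data tests the test code.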

I'm not suggesting that under all circumstances, your test code doesn't need testing. But it seems to me, that in most situations, the code that actually performs a check might be tested by your test data well enough and that most exceptions arise as you develop and refine that code and can be fixed before you come to rely on it.

Tags: #testautomation #automation #advancingtesting #automatedregression


First published 08/07/2014

It seems like testing competitions are all the rage nowadays. I was really pleased to be invited to the Testing Cup in Poland in early June this year. The competition was followed by a full day conference at which I gave a keynote – The Value of Testing – and you can see a video of that here.

The competition was really impressive with nearly 300 people taking part in the team and individual competitions. Here's a nice video (with English subtitles).



Tags: #valueoftesting #TestingCup #Poland
