Paul Gerrard

My experiences, opinions in the Test Engineering business. I am republishing/rewriting old blogs from time to time.

First published 08/08/2014

The nice folk at Testing Cup in Poland have posted a video of my keynote on YouTube.

I originally gave a EuroSTAR keynote in 2002 titled 'What is the Value of Testing and how can we improve it?'. This talk brings the topic up to date but also introduces the New Model Testing that I'm working on at the moment. 

I've written a paper describing New Model Testing, which can be downloaded here.



Tags: #valueoftesting #NewModelTesting #TestingCup

Paul Gerrard Please connect and contact me using my linkedin profile. My Mastodon Account

First published 30/04/2013

Huib Schoots asked me to name my top 10 books for testers. Initially, I was reluctant because I'm no fan of beauty parades and if an overall 'Top 10 Books' were derived from voting:

 

  1. The same old same old books would probably top the chart again, and
  2. Any obscure books I nominated would be lost in the overall popularity contest.

 

So what's the point?

 

Nassim Taleb tweeted, just the other day: "You should never allow knowledge to turn into a spectator sport, by condoning rankings, awards, 'hotness' & 'incentives'" (3rd ethical rule)

— Nassim N. Taleb (@nntaleb) April 28, 2013

Having said that, and with reservations, I sent Huib a list. What books would I recommend to testers? I’m still reluctant to recommend books because it is a personal choice. I can’t say whether anyone else would find these useful or not. But, scanning the bookshelf in my study, I have chosen books that I like and think are *not* likely to be on a tester’s shelf. These are not in any order. My comments are subjective and personal.

  1. Zen and the Art of Motorcycle Maintenance, Robert M Pirsig – Quality is for philosophers, not systems professionals. I try not to use the Q word.
  2. Software Testing Techniques, Boris Beizer - for the stories and thinking, more than the techniques stuff (although that’s not bad).
  3. The Complete Plain Words, Sir Ernest Gowers – Use this or your language/locale equivalent to improve your writing skills.
  4. Test-Driven Development, Kent Beck – is TDD about testing or design? Does it matter? Testing is about *much* more than finding bugs.
  5. The Tester’s Pocketbook, Paul Gerrard – I wrote this as an antidote to the ‘everything is heuristic’ idea, which I believe is limiting and unambitious.
  6. Systems Thinking, Systems Practice, Peter Checkland – an antidote to the “General Systems Theory” nonsense.
  7. The Tyranny of Numbers, Why Counting Can’t Make Us Happy, David Boyle – The title says it all. Entertaining discussion of targets, metrics and misuse.
  8. Straight and Crooked Thinking, Robert H Thouless – Title says it all again. Thouless wrote another book, 'Straight and Crooked Thinking in Wartime', 70-odd years ago – a classic.
  9. Logic: an introduction to elementary logic, Wilfred Hodges – Buy this (or a similar) book on predicate logic: few understand what it is, but it’s a cornerstone of critical thinking.
  10. Lectures on Physics, Richard P Feynman – It doesn’t matter whether you understand the Physics; it’s the thought processes he exposes that matter. RPF doesn't do bullsh*t.

11. Theory of Games and Economic Behavior, John von Neumann, Oskar Morgenstern – OK, it’s no. 11. Don’t read this. Be aware of it. We have the mathematics for when we finally collect data that people can *actually* use to make decisions.

There are many non-testing books that didn't make my list. Near misses include at least one scripting/programming language book, all of Edward Tufte's books, Gause & Weinberg's "Exploring Requirements", DeMarco and Lister's "Peopleware", DeMarco's "Controlling Software Projects" and "Structured Analysis", Nancy Leveson's "Safeware", Constantine and Lockwood's "Software for Use" and lots more. Ten books isn't enough. Perhaps one hundred isn't enough either.

Huib asked me to explain my "General Systems Theory" nonsense comment. OK, nonsense may be too strong. Maybe I should say non-science?

The Systems movement has existed for 70–80+ years. Dissatisfaction with General Systems Theory (GST) in the 60s and 70s caused a split between the theorists and those who wanted to evolve Systems Thinking into a problem-solving approach. Maybe you could call them two ‘schools’? GST was attacked very publicly by the problem-solving movement (the Thinkers), who went down a different path. GST never delivered a ‘grand theory’ and appeared to exist only to publish a journal. The Thinker movement proceeded to work in areas covering hard and soft systems and decision-making support.

It's clear that the schism between the two streams caused (or was caused by) antagonism. It is reminiscent of the testing schools 'debate'. A typical criticism from the 1970s: one expert (Naughton) chided another expert (Lilienfeld, who had attacked GST), saying “there was nothing approaching a coherent body of tested knowledge to attack, rather a melange of insights, theorems, tautologies and hunches…”. The problem the 'Thinkers' had with GST is that most of its insights and theorems are unsubstantiated and not used for anything. GST drifts into the mystical sometimes – it’s almost like astrology. Checkland says: “The problem with GST is it pays for its generality with lack of content” and "... the fact that GST has failed in its application does not mean that systems thinking has failed".

The Systems Thinking movement, being focused on problem solving, inspired most of the useful (and used) approaches and methods. It’s less ‘pure’, but by being grounded in problem solving it has more value. Google "General Systems Theory" and you get few hits of relevance and only one or two books written in the last 25 years. Try "Systems Thinking" and you get lots (although there's a certain amount of bandwagon-jumping, perhaps). It seems to me that GST has been left behind. I don't like GST because it is so vague: it is impossible to prove or disprove the theories, so, being a sceptical sort, I find it says little useful that I can trust.

When I say Systems Thinking I mean Systems Thinking and Soft Systems Methodology and not General Systems Research and Systems Inquiry

— Paul Gerrard (@paul_gerrard) December 22, 2012

 



Tags: #Top10Books

Paul Gerrard Please connect and contact me using my linkedin profile. My Mastodon Account

First published 23/10/2013

This article appeared in the October 2013 Edition of Professional Tester magazine. To fit the magazine format, the article was edited somewhat. The original/full article can be downloaded here.

Big Data

Right now, Big Data is trending. Data is being captured at an astonishing rate. Any device that has a power supply has some software driving it, and if the device is connected to a network or the internet, it is probably posting activity logs somewhere. The volumes being captured across organisations are huge – databases of petabytes (millions of gigabytes) of data are springing up in large and not-so-large organisations – and traditional relational technology simply cannot cope. Mayer-Schönberger and Cukier argue in their book “Big Data” [1] that it’s not that the data is huge; it’s that, for all business domains, it seems to be much bigger than anything we collected before. Big Data can be huge, but the more interesting aspect of Big Data is its lack of structure. The change in the philosophy of Big Data is reflected in three principles.
  1. Traditionally, we have dealt with samples (because the full data set is large), and as a consequence we have tended to focus on relationships that reflected cause and effect. Looking at the entire data set allows us to see details that we never could before.
  2. Using the full data set releases us from the need to be exact. If we are dealing with data points in the tens or hundreds, we focus on precision. If we deal with thousands or millions of data points, we aren’t so obsessed with minor inaccuracies like losing a few records here and there.
  3. We must not be obsessed with causality. If the data tells us there is a correlation between two things we measure, then so be it. We don’t need to analyse the relationship to make use of it. It might be good enough just to know that the number of cups of coffee bought by product owners in the cafeteria correlates inversely with the number of severity 1 incidents in production. (Obviously, I made that correlation up – but you see what I mean; there’s a toy sketch of the idea after this list.) Maybe we should give the POs tea instead?
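To make the third principle concrete, here is a toy sketch in Python. The numbers are invented, just like the coffee example above; the point is that a correlation coefficient can be computed, and acted on, without any causal model at all:

```python
# Toy illustration: correlation without causation. All numbers are invented.
coffee_cups = [41, 35, 50, 28, 44, 31, 38]   # weekly cups bought by product owners
sev1_incidents = [3, 5, 1, 7, 2, 6, 4]       # weekly severity 1 incidents in production

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"r = {pearson(coffee_cups, sev1_incidents):.2f}")  # strongly negative
```

A strongly negative r is all the 'insight' needed to act (switch the POs to tea?); no causal analysis is required.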
The interest in Big Data as a means of supporting decision-making is rapidly growing. Larger organisations are creating teams of so-called data scientists to orchestrate the capture of data and analyse it to obtain insights. The phrase ‘from insight to action’ is increasingly used to summarise the need to improve and accelerate business decision-making.

‘From Insight to Action’

Activity logs tend to be captured as plain text files with fields delimited by spaces, tabs or commas, or as JSON- or XML-formatted data. This data does not arrive validated, structured and integral as it would be in a relational table – it needs filtering, cleaning and enriching as well as storing. New tools designed to deal with such data are becoming available, and a new set of data management and analysis disciplines is also emerging. What opportunities are out there for testing? Can the Big Data tools and disciplines be applied to traditional test practices? Will these test practices have to change to make use of Big Data? This article explores how data captured throughout a test and assurance process could be merged and integrated with definition data (requirements and design information) and production monitoring data, and analysed in interesting and useful ways.
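As a minimal sketch of that filtering/cleaning/enriching step, here is some Python over an invented comma-delimited log (the timestamp/event/user/status layout is an assumption for illustration, not a real format):

```python
import csv
import io
from datetime import datetime

# Hypothetical raw activity log: comma-delimited, unvalidated, with gaps.
raw_log = io.StringIO(
    "2013-10-01T09:14:02,login,alice,200\n"
    "2013-10-01T09:14:05,search,bob,500\n"
    "not-a-timestamp,search,,200\n"           # malformed row: filter it out
    "2013-10-01T09:14:09,logout,alice,200\n"
)

clean = []
for row in csv.reader(raw_log):
    if len(row) != 4:
        continue                              # filter: wrong shape
    ts, event, user, status = row
    try:
        when = datetime.fromisoformat(ts)     # clean: parse the timestamp
    except ValueError:
        continue                              # filter: unparseable timestamp
    clean.append({"when": when, "event": event,
                  "user": user or "<unknown>",   # enrich: default missing user
                  "error": int(status) >= 500})  # enrich: flag server errors

print(len(clean), "usable rows;", sum(r["error"] for r in clean), "errors")
```

Real Big Data pipelines do this at vastly greater scale with dedicated tools, but the disciplines – filter, clean, enrich, store – are the same.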

The original/full article can be downloaded here.

Tags: #continuousdelivery #BigData #TestAnalytics

Paul Gerrard Please connect and contact me using my linkedin profile. My Mastodon Account

First published 27/06/2014

The nice folk at Testing Cup in Poland have posted a video of my keynote on YouTube.

 



Tags: #ALF

Paul Gerrard Please connect and contact me using my linkedin profile. My Mastodon Account

First published 22/12/2015

This article originally appeared on the Ministry of Testing website on 16 Feb 2014, here: http://www.ministryoftesting.com/2014/02/testers-coding-debate-can-move-now/. I have reproduced it unchanged on my own blog as part of consolidating all my articles in one place.

Should Testers Learn How to Write Code?

The debate on whether testers should learn how to write code has ebbed and flowed. There have been many blogs on the subject both recent and not so recent. I have selected the ten most prominent examples and listed them below. I could have chosen twenty or more. I encourage you to read references [1, 2, 3, 4, 5, 6, 7, 8, 9, 10].

At the BCS SIGIST in London on the 5th December 2013, a panel discussion was staged on the topic “Should software testers be able to code?” The panellists were: Stuart Reid, Alan Richardson, Dot Graham and myself. Dot recorded the session and has very kindly transcribed the text of the debate. I have edited my contributions to make more sense than I appear to have made ‘live’. (I don’t know if the other contributors will refine their content and a useful record will emerge.) Alan Richardson has captured some pre- and post-session thoughts here – “SIGIST 2013 Panel – Should testers be able to code?” [11]. I have used some sections of the comments I made at the session in this article.

It’s easy to find thoughtful comments on the subject of testers and coding skills. But why are smart people still writing about the subject? Hasn’t this issue been resolved yet? There’s a certain amount of hand-wringing and polarisation in the discussion. For example, one argument goes, if you learn how to code, then either:

a)    you are not, by definition, a tester anymore; you are a programmer, or

b)    by learning how to code, you may go native, lose your independence and become a less effective tester.

Another perfectly reasonable view is that you can be a very effective tester without knowing how to code if your perspective is black-box or functional testing only.

In this article I’d like to explore why the situation is not black-and-white. It’s what you do, not what you know, that frames your role; and adding one skill to your skill set does not reduce the value of another. I’d like to move away from the ‘should I, shouldn’t I’ debate and explore how you might acquire capabilities that are more useful for you personally or for your team – if your team needs those capabilities.

The demand for coding skills is driven by the demand for capabilities in your project. In a separate article I’ll be proposing a ‘road-map’ for tester capabilities that require varying programming skills.

My Contribution to the ‘Debate’

Before we go any further, let me make a few position statements derived from the Q&A of the SIGIST debate. By the way, when the SIGIST audience was asked, it appeared that more than half confirmed that they had programming skills/experience.

Software testers should know about software, but don’t usually need to be experts

Business acceptance testers need to know something of the business that the system under test will support. A system tester needs to know something about systems, and systems thinking. Software testers ought to know something about software, shouldn’t they? Should a tester know how to write code? If they are looking at code, figuring out ways to test it, then probably. And if they need to write code of their own, or they are in day-to-day contact with developers, helping them to test their code, then technical skills are required. But what a tester needs to know depends on the conversations they need to have with developers.

Code comprehension (reading and understanding code) might be all that is required to take part in a technical discussion. Some programming skills, but not necessarily at a ‘professional programmer’ level, are required to create unit tests, services or GUI test automation, test data generation, output scanning, searching and filtering and so on. The level of skill required varies with the task in hand.
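To give a sense of the level involved: output scanning of the kind mentioned above needs only a few lines of code. A minimal sketch in Python, over an invented log format (the content is hypothetical, for illustration only):

```python
import re

# Hypothetical application log excerpt -- invented content for illustration.
log_lines = [
    "2014-02-16 10:01:12 INFO  order 1001 accepted",
    "2014-02-16 10:01:13 ERROR order 1002 rejected: timeout",
    "2014-02-16 10:01:15 WARN  retrying order 1002",
]

# Scan for lines that signal failure and pull out the order id.
failure = re.compile(r"ERROR order (?P<order_id>\d+)")
for line in log_lines:
    match = failure.search(line)
    if match:
        print("failed order:", match.group("order_id"))
```

This is nowhere near professional software development, but it is exactly the kind of scripting that pays off for a tester day to day.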

New skills only add, they can’t subtract

There is some resistance from some testers to learning a programming language. But having skills can’t do you any harm. Having them is better than not having them; new skills only add, they don’t subtract.

Should testers be compelled to learn coding skills?

Most of us live in free countries, so if your employer insists and you refuse, then you can find a job elsewhere. But is it reasonable to compel people to learn new skills? It seems to me that if your employer decides to adopt new working practices, you can resist the change on the basis of principle or conscience or whatever, but if your company wishes to embed code-savvy testers in the development teams it really is their call. You can either be part of that change or not. If you have the skills, you become more useful in your team and more flexible too of course.

How easy is it to learn to code? When is the best time to learn?

Having any useful skill earlier is better than later of course, but there’s no reason why a dyed-in-the-wool non-techy can’t learn how to code. I suppose it’s harder to learn anything new the older you are, but if you have an open mind, like problem-solving, precise thinking, are a bit of a pedant and have patience – it’s just a matter of motivation.

However, there are people who simply do not like programming or find it too hard or uncomfortable to think the way a programmer needs to think. Some just don’t have the patience to work this way. It doesn’t suit everyone. The goal is not usually to become a full-time programmer, though, so maybe it’s worth persisting. But ultimately, it’s your call whether you take this path.

How competent at coding should testers be?

My thesis is that all testers could benefit from some programming knowledge, but you don’t need to be as ‘good’ a programmer as a professional developer in order to add value. It depends of course, but if you have to deal with developers and their code, it must be helpful to be able to read and understand their code. Code comprehension is a less ambitious goal than programming. The level of skill varies with the task in hand. There is a range of technical capabilities that testers are being asked for these days, but these do not usually require you to be a professional programmer.

Does knowing how to code make you a better tester?

I would like to turn that around and say, is it a bad thing to know how to write code if you’re a tester? I can’t see a downside. Now you could argue: if you learn to write code, then you’re infected with the same disease that the programmers have – they are blind to their own mistakes. But testers are blind to their own mistakes too. This is a human failing, not one that is unique to developers.

Let’s take a different perspective: If you are exploring some feature, then having some level of code knowledge could help you to think more deeply about the possible modes (the risks) of failure in software and there’s value in that. You might make the same assumptions, and be blind to some assumptions that the developer made, but you are also more likely to build better mental models and create more insightful tests.

Are we not losing the tester as a kind of proxy of the user?

If you push a tester to be more like a programmer, won’t they then think like a programmer, making the same assumptions, and stop thinking of or like the end user?

Dot Graham suggested at the SIGIST event, “The reason to separate them (testers) was to get an independent view, to find the things that other people missed. One of the presentations at EuroSTAR (2013) was a guy from an agile team who found that all of the testers had ‘gone native’ and were no longer finding bugs important to users. They had to find a way to get independence back.”

On the other hand, by separating the testers, the team loses much of the rapid feedback, which is probably more important than ‘independence’. Independence is important, but you don’t need to be in a separate team (with a bureaucratic process) to have an independent mindset – which is what really matters. Wherever the tester is based, the independence that matters is their independent mind, whether they test at the end or work with the developer before the code is written.

There is a Homer Simpson quote [12]: “How is education supposed to make me feel smarter? Besides, every time I learn something new, it pushes some old stuff out of my brain. Remember when I took that home winemaking course, and I forgot how to drive?”

I don’t think that if you learn how to code, you lose your perspective as a subject matter expert or your experience as a real user – although I suppose there is a risk of that if you are a cartoon character. There is a risk of going native if, for example, you are a tester embedded with developers. By the same token, there is a risk that, by being separated from developers, you don’t treat them as members of the same team; you think of them as incompetent, as the enemy. A professional attitude and awareness of biases are the best defences here.

Why did we ever separate testers from developers? Suppose that today, your testers were embedded and you had to make a case that the testers should be extracted into a separated team. I’m not sure the case for ‘independence’ is so easily made because siloed teams are being discredited and discouraged in most organisations nowadays.

What is this shift-left thing?

There seems to be a growing number of companies reducing their dependency on scripted testing, while their dependency on exploratory testers and on testers ‘shifting left’ is increasing.

Right now, a lot of companies are pushing forward with shift-left, Behaviour-Driven Development (BDD), Acceptance Test-Driven Development (ATDD) or Test-Driven Development (TDD). In all cases, someone needs to articulate the examples – the checks – that drive these processes. Who will write them, if not the tester? With ATDD and BDD approaches, communication is supported with stories, and these stories are used to generate automated checks using tools.
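As a minimal sketch of a story driving an automated check: the story below is written as Given/When/Then, and the check a tester might derive from it is expressed in plain pytest style rather than with a BDD tool. The Basket class is hypothetical – a stand-in for whatever the team is actually building:

```python
# Story (the example that drives the check):
#   Given a basket containing one item priced at 40.00
#   When a 25% discount code is applied
#   Then the basket total is 30.00

class Basket:
    """Hypothetical system under test -- stands in for the real code."""
    def __init__(self):
        self.items = []
        self.discount = 0.0

    def add(self, price):
        self.items.append(price)

    def apply_discount(self, percent):
        self.discount = percent / 100.0

    def total(self):
        return sum(self.items) * (1 - self.discount)

def test_discount_code_reduces_total():
    basket = Basket()                 # Given
    basket.add(40.00)
    basket.apply_discount(25)         # When
    assert basket.total() == 30.00    # Then
```

Run under pytest, the check fails loudly if the discount calculation ever regresses – which is exactly the rapid feedback the story was written to provide.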

Companies are looking to embed testers into development teams to give the developers a jump start to do a better job (of development and testing). An emerging pattern is that companies are saying, “The way we’ll go Agile is to adopt TDD or BDD, and get our developers to do better testing. Obviously, the developers need some testing support, so we’ll need to embed some of our system testers in those teams. These testers need to get more technical.”

One goal is to reduce the number of functional system testers. There is a move to do this – not driven by testers – but by development managers and accountants. Testers who can’t do anything but script tests, follow scripts and log incidents – the plain old functional testers – are being offshored, outsourced, or squeezed out completely and the shift-left approach supports that goal.

How many testers are doing BDD, ATDD or TDD?

About a third of the SIGIST audience (of around 80) raised their hands when asked this. That seems to be the pattern at the moment. Some companies practising these approaches have never had dedicated independent testers, so the proportion of companies adopting these practices may be higher.

Shouldn’t developers test their own code?

Glenford Myers’ book [13] makes the statement, “A programmer should avoid attempting to test his or her own program”. We may have depended on that ‘principle’ too strongly, and built an industry on it, it seems. There are far too many testers doing bureaucratic paperwork-shuffling – writing stuff down, creating scripts that are inaccurate and out of date, processing incidents that add little value, etc. The industry is somewhat bloated, and budget-holders see testers as an easy target for savings. Shift-left is a reassessment and realignment of responsibility for testing.

Developers can and must test their own code. But that is not ALL the testing that is done, of course.

Do testers need to re-skill?

Having technical skills means that you can become a more sophisticated tester. We have an opportunity, on the technical side, working more closely – pairing even – with developers. (Although we should also look further upstream for opportunities to work more closely with business analysts).

Testers have much to offer to their teams. We know that siloed teams don’t work very well, and Agile has reminded us that collaboration and rapid feedback drive progress in software teams. But who provides this feedback? Mostly the testers. We have the right skills and they are in demand. So although the door might be closing on ‘plain old functional testers’, the window is open and opportunities are emerging to do really exciting things. We need to be willing to take a chance.

We’re talking about testers learning to code, but what about developers learning to test better? Should organisations look at this?

Alan Richardson: We need to look at reality and listen to people on the ground. Developers can test better, business analysts can test better – the entire process can be improved. We’re discussing testers because this is a testing conference. I don’t know if other conferences are discussing these things, but developers are certainly getting better at testing, although they argue about different ways of doing it. I would encourage you to read some of the modern development books like “Growing Object-Oriented Software Guided by Tests” [14] or Kent Beck [15]. That’s how developers are starting to think about testing, and this has important lessons for us as well.

There is no question that testers need to understand how test-driven approaches (BDD, TDD in particular) are changing the way developers think about testing. The test strategy for a system and testers in general must take account (and advantage) of these approaches.

Summary

In this article, I have suggested that:
  • Tester programming skills are helpful in some situations and having those skills would make a tester more productive
  • It doesn’t make sense to mandate these skills unless your organization is moving to a new way of working, e.g. shift-left
  • Tester programming skills rarely need to be as comprehensive as a professional programmer’s
  • A tester-programming training syllabus should map to required capabilities and include code-design and automated checking methods.
We should move on from the ‘debate’ and start thinking more seriously about appropriate development approaches for testers who need and want more technical capabilities.

References

  1. Elisabeth Hendrickson, Do Testers Have to Write Code?, http://testobsessed.com/2010/10/testers-code/
  2. Cem Kaner, comments on the blog above, http://testobsessed.com/2010/10/testers-code/comment-page-1/#comment-716
  3. Alister Scott, Do software testers need technical skills?, http://watirmelon.com/2013/02/23/do-software-testers-need-technical-skills/
  4. Markus Gärtner, Learn how to program in 21 days or so, http://www.associationforsoftwaretesting.org/2014/01/23/learn-how-to-program-in-21-days-or-so/
  5. Shmuel Gershon, Should/Need Testers know how to Program, http://testing.gershon.info/201003/testers-know-how-to-program/
  6. Alan Page, Tear Down the Wall, http://angryweasel.com/blog/?p=624, and Exploring Testing and Programming, http://angryweasel.com/blog/?p=613
  7. Alessandra Moreira, Should Testers Learn to Code?, http://roadlesstested.com/2013/02/11/the-controversy-of-becoming-a-tester-developer/
  8. Rob Lambert, Testers need to learn to code, http://thesocialtester.co.uk/testers-need-to-learn-to-code/
  9. Rahul Verma, Should the Testers Learn Programming?, http://www.testingperspective.com/?p=46
  10. Michael Bolton, At least three good reasons for testers to learn how to program, http://www.developsense.com/blog/2011/09/at-least-three-good-reasons-for-testers-to-learn-to-program/
  11. Alan Richardson, SIGIST 2013 Panel – Should Testers Be Able to Code, http://blog.eviltester.com/2013/12/sigist-2013-panel-should-testers-be.html
  12. 50 Funniest Homer Simpson Quotes, http://www.2spare.com/item_61333.aspx
  13. Glenford J Myers, The Art of Software Testing
  14. Steve Freeman and Nat Pryce, Growing Object-Oriented Software Guided by Tests, http://www.growing-object-oriented-software.com/
  15. Kent Beck, Test-Driven Development by Example
  16. Arnold Pears et al., A Survey of Literature on the Teaching of Introductory Programming, http://www.seas.upenn.edu/~eas285/Readings/Pears_SurveyTeachingIntroProgramming.pdf


Tags: #Coding #TestersLearningCode #TechnicalTesters #Programming

Paul Gerrard Please connect and contact me using my linkedin profile. My Mastodon Account

First published 17/06/2016

A further response to the debate here: https://www.linkedin.com/groups/690977/690977-6145951933501882370?trk=hp-feed-group-discussion. I couldn't fit my reply in a comment, so I have put it here instead.

Hi Alan. Thanks – I'm not sure we are disagreeing; I think we're debating from different perspectives, that's all.

Your suggestion that other members of our software teams might need to re-skill or up-skill isn't so far-fetched. This kind of re-assignment and re-skilling happens all the time. If a company needs you to reskill because they've in/out/right-sourced, or merged with another company, acquired a company or been acquired - then so be it. You can argue from principle or preference, but your employer is likely to say comply or get another job. That's employment for you. Sometimes it sucks. (That's one reason I haven't had a permanent job since 1984).
 
My different perspective? Well, I'm absolutely not arguing from the high altar of thought-leadership, demagoguery or dictatorship. Others can do that, and you know where they can stick their edicts.

Almost all the folk I meet in the testing services business are saying Digital is dominating the market right now. Most organisations are still learning how to do this and seek assistance from wherever they can get it. Services businesses may be winging it, but eventually the dust will settle and they and their clients will know what they are doing. The job market will re-align to satisfy this demand for skills. It was the same story with client/server, the internet, Agile, mobile and every other new approach. It's always the same with hype – some of it becomes our reality.

(I'm actually on a train writing this - I'm on my way to meet a 'Head of Digital' who has a testing problem, or perhaps 'a problem with our testers'. If I can, I'll share my findings...)

Not every company adopts the latest craze, but many will. Agile (with a small a), continuous delivery, DevOps, containerisation, micro-services, shift-left, -right or whatever are flavour of the month (although agile, IMHO, has peaked). A part of this transition or transformation is a need for more technical testers – that's all. The pressure to learn code is not coming from self-styled experts; it is coming from the job market, which is changing rapidly. It is mandatory only if you want to work in these or related environments.

The main argument for understanding code is not to write code. Code-comprehension, as I would call it, is helpful if your job is to collaborate more closely with developers using their language, that's all.



Tags: #Testers #learntocode

Paul Gerrard Please connect and contact me using my linkedin profile. My Mastodon Account

First published 05/10/2015

It was inevitable that people would compare my formulation of a New Model for Testing with the James Bach and Michael Bolton distinction between 'testing' and 'checking'. I've kind of avoided giving an opinion online, although I have had face-to-face conversations with people who like and who dislike this idea, and I have expressed some opinions privately. The topic came up again last week at STARWEST, so I thought I'd put the record straight.

I should say that I agree with Cem Kaner's rebuttal of the testing v checking proposal. It is clear to me that although the definition of checking is understandable, the definition of testing is less so, evidenced by the volume of comments on the authors' own blogs. In my discussions with people who support the idea, it was easy to agree on checking as a definition, but testing, as defined, seemed much harder for them to defend in the face of experience.

Part of the argument for highlighting what checking is, is to posit that we cannot rely on checking alone, particularly checking with tools. Certainly, brainless use of tools to check – a not infrequent occurrence – is to be decried. But then again, brainless use of anything… It is just plain wrong, though, to say that we cannot rely on automated tests. Some products cannot be tested in any other way. Whether you like it or not, that's just the way it is.

One reason I've been reticent on the subject is I honestly thought people would see through this distinction quickly, and it would be withdrawn or at least refined into something more useful.

Some, probably many, have adopted the checking definition. But few have adopted the testing definition, such as it is, and debated it with any conviction. It looks like I have to break cover.

It would be easy to compare exploration in the New Model with the B & B view of testing and my testing with their view of checking. There are some parallels but comparing them only serves to highlight the differences in perspectives of the authors. We don't think the same, and that's for sure.

From my perspective, perhaps the most prominent argument against the testing v checking split is the notion that somehow testing (if it is simply their label for what I call exploration) and checking are alternatives. The sidelining of checking as something less valuable, intellectual or effective doesn't match experience. The New Model reflects this in that the tester explores sources of information to create models that inform testing. Whether these tests are in fact checks is important, but the choice of scripting as a means of recording a test for use in execution (by tools or people) is one of logistics – it is, dare I say, context-specific.

The exploration comes before the test. If you do not understand what the system should (or should not) do, you cannot formulate a meaningful test. You can enquire what a system might do, but who is to say whether that behaviour is correct or otherwise, without some input from a source of knowledge other than the system itself. The SUT cannot be its own test oracle. The challenges you apply during exploration of the SUT are not tests – they are trials of your understanding (your model) of the actual behaviour of the SUT.

Now, in most situations, it is extremely hard to trust that a requirement, however stated, is complete, correct, unambiguous – perfect. In this way, one might never comfortably decide the time is right for testing or checking (as I have scoped it). The New Model implies one should persevere to improve the requirements/sources and align them with your mental model before you commit to (coding or) testing. Of course, one has to make that transition sometime, and that's where the judgement comes in. Who can say what that judgement should be, except that it is a personal, consensual or company-cultural decision to proceed.

Exploration is a dynamic activity, whereby you do not usually have a fully formed view of what the system should do, so you have to think, model and try things based on the model as it stands. Your current model enables you to make predictions on behaviour and to try these predictions on the SUT or stakeholders or against any other source of knowledge that is germane to the challenge at hand.

Now, I fully appreciate our sources of knowledge are fallible. This is part and parcel of the software development process and why there are feedback loops in (my version of) exploration. But I argue that the exploration part of the test process (enquiring, modelling, predicting and challenging) is the same for developers as it is for testers.

The critical step in transitioning from exploration to testing – or, in the case of a developer, to writing code based on their understanding (synonymous with the 'model' in their head) – is the point where the developer or tester believes they understand the need and trusts their model or understanding. Until that moment, they remain in the exploration state, are uncertain to some degree and are not yet confident (if that is the right term) that they could decide whether a system behaviour is correct, incorrect or just 'interesting'.

If a developer or tester proceeds to coding or testing before they trust their model, then it's likely the wrong thing will be built or tested or it will be tested badly.

Now just to take it further, a tester would not raise a defect report while they are uncertain of the required behaviour of the SUT. Only when they are confident enough to test would it be reasonable to do so. If you are not in a position to say 'this works correctly according to my understanding of the requirement (however specified)' you are not testing, you are exploring your sources of information or the SUT.

In the discussion above, the New Model appears to align with this rather uncertain process called exploration.

Now, let's return to the subject of testing versus checking. Versus is the wrong word, I am sure. Testing and checking are not opposed or alternatives. Some tests can be scripted in some way, for the use of people or automated tools. In order to reach the point in one's understanding to flip from exploration to testing, you have to have done the ground work. In many ways, it takes more effort, thinking, modelling to reach the requisite understanding to construct a script or procedure to execute a check than to just learn what a system does through exploration, valuable though that process is.

As an aside, it's very hard to delineate where scripted and unscripted testing begin and end anyway. If I say, 'test X like so, and keep your eyes open' is that a script or an exploratory charter?

In no way can checking be regarded as less sophisticated, useful, effective or requiring less effort than testing without a script. The comparison is spurious: some systems, for example, can be tested in no other way than with scripted tooling. In describing software products, Philip Armour (in his book, 'The Laws of Software Process') says that software is not a product, but rather 'the product is the knowledge contained in the software'. Software is not a product; it is a medium.

The only way a human can test this knowledge product is through the layers of hardware (and software) that must be utilised in very specific ways. In almost every respect, testing is dependent on some automated support. So, as Cem says, at some level, 'all tests are automated... all tests are manual'.

Since the vast majority of software has no user interface, it can only ever be checked using tools. (Is testing in the B & B world only appropriate to a user interface, then?) On the other hand, some user interfaces cannot be automated in any viable way (often because of the automation technology between the human and the knowledge product). That's life, not a philosophical distinction.
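To illustrate the point: a component with no user interface can only be reached through code, that is, with a tool. A minimal sketch, where the hypothetical parse_amount() function stands in for the software under test:

```python
# A component with no user interface -- here a hypothetical parsing
# utility standing in for the SUT -- can only be exercised through code.

def parse_amount(text: str) -> int:
    """Stand-in SUT: parse a string like '£1,250' into whole pounds."""
    return int(text.replace("£", "").replace(",", ""))

checks = [
    ("£1,250", 1250),
    ("£0", 0),
]
for given, expected in checks:
    actual = parse_amount(given)
    assert actual == expected, f"{given!r}: expected {expected}, got {actual}"
print("all checks passed")
```

No amount of unscripted, tool-free exploration can exercise such a component; the script, or something like it, is the only way in.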

The case can be made that people following scripts by rote might be blinkered and miss certain aspects of incorrect behaviour. This is certainly the case, especially if people are asked to follow scripts blindly. But in all my thirty years' experience of testing, no tester has ever been asked to be so blinkered. In fact, the opposite is often true, and testers are routinely briefed to look out for anything anomalous, precisely to address the risk of oversight. Of course, humans make mistakes and oversight is an inevitability. However, it could also be said that working to a script makes the tester more eagle-eyed – the formality of scripted testing, possibly witnessed (akin to pairing, in fact), is a serious business.

On the other hand, people who have been asked to test without scripts might be unfocused, lazy almost, unthinking and sloppy. They are hard to hold to account, and a charter that merely lists features or scenarios to cover in some way, used without thinking or modelling, is unsafe.

What is a charter anyway? It's a high level script. It might not specify test steps, but more significantly it usually defines scope. An oversight in a scripted test might let a bug through. An oversight in a charter might let a whole feature go untested.

Enough. The point is this. It is meaningless and perverse to compare undisciplined, unskilled scripted or unscripted testing with its skilled, disciplined counterpart. We should be paying attention to the skills of the testers we employ to do the job. A good tester, investing the effort on the left-hand side of the New Model, will succeed whether they script or not. For that reason alone, we should treat the scripted/unscripted dichotomy as a matter of logistics, and ignore it from the perspective of assessing testers' skills.

We should be thankful that, depending on your project, some or all testing can be scripted/automated, and leave it at that.

Tags: #testingvchecking #NewModelTesting

Paul Gerrard Please connect and contact me using my linkedin profile. My Mastodon Account

First published 04/09/2014

In advance of the keynote I'm giving at the BCS SIGIST meeting today, I have uploaded two recent papers that were written for the Tea Time with Testers magazine.

There are two articles in the series so far:

1. The Internet of Everything – What is it and how will it affect you?
 
In this article series I want to explore what the IoT and IoE are and what we need to start thinking about. I’ll approach this from the point of view of a society that embraces the technology. Then I will look more closely at the risks we face and, finally, at how we as the IT community in general, and the testing community in particular, should respond.
 
In the first article of the series I look at what the IoE is and how it affects us all. It’s important that you get a perspective on what the IoE is, so that you have a sense of the scale, the variety, the ubiquity, the complexity and the challenge of the technological wave that many people believe will dominate our industry for the next ten to twenty years.
 
Let me start the whole series off with what sounds a bit like science fiction, but will soon be science fact. John Smith and his family are an invention.
 
 

2. Internet of Everything – Architecture and Risks

Right now, it is a very confusing state of affairs, but clearly, an awful lot of effort and money is being invested in defining the standards and building business opportunities from the promise of the new Internet world order. Standards are on the way, but today, most applications are what you might call bleeding edge, speculative or exploratory. It looks like standards will reveal themselves when successful products and architectures emerge from the competitive melee.

The future is being predicted with more than a little hype. The industry research companies are struggling to figure out how to make reasonable hype-free predictions. A better-than-average summary can be found here: http://harborresearch.com/wp-content/uploads/2013/12/Harbor-Research_Diversified-Industrial-Newsletter.pdf

In the second article, I discuss the evolving architecture of the IoE and speculate on the emerging risks of it all. 

Other papers are on the way... if only I had more time to get them written... :O)



Tags: #IOE #IOT #InternetofThings #InternetofEverything

Paul Gerrard Please connect and contact me using my linkedin profile. My Mastodon Account

First published 08/07/2014

It seems like testing competitions are all the rage nowadays. I was really pleased to be invited to the Testing Cup in Poland in early June this year. The competition was followed by a full day conference at which I gave a keynote – The Value of Testing – and you can see a video of that here.

The competition was really impressive with nearly 300 people taking part in the team and individual competitions. Here's a nice video (with English subtitles).



Tags: #valueoftesting #TestingCup #Poland

Paul Gerrard Please connect and contact me using my linkedin profile. My Mastodon Account

First published 12/06/2013

This is the first of what will become a regular webinar slot.

The aim of these webinars is to attempt to answer testing-related questions from the audience. We ask people to submit questions as they register and we'll prepare answers for the live webinar, and attempt to answer additional questions at the end of the session.

During the webinar Paul shares his views on the questions, based on extensive experience in software development and testing in particular.

In this session, Paul addresses the following questions:

  1. How does testing fit into an Agile environment?
  2. How do you get into software testing without experience?
  3. Do you need a certification to get a testing job?
  4. Who represents the testing community?
  5. All our test automation is at the end and it is unmanageable. How do we reduce the work and speed up test execution?
  6. As a tester, what areas should I be watching/researching?
  7. I am interested in the number of test environments used and the recommended frequency of code updates to be applied
  8. Why do testers get the blame?


Tags: #FAQ #TestersQuestionTime #TQT

Paul Gerrard Please connect and contact me using my linkedin profile. My Mastodon Account