Agile Governance
First published 29/06/2015
Tags: #agile #businessstorymethod #BusinessStoryManager #SP.QA #Agilegovernance
Paul Gerrard My linkedin profile is here My Mastodon Account
My experiences in the Test Engineering business; opinions, definitions and occasional polemics. Many have been rewritten to restore their original content.
First published 29/06/2015
The slides can be found here.
Tags: #agile #TestStrategy #ProjectProfiling
Paul Gerrard My linkedin profile is here My Mastodon Account
First published 27/06/2014
The nice folk at Testing Cup in Poland have posted a video of my keynote on YouTube.
Paul Gerrard My linkedin profile is here My Mastodon Account
First published 08/08/2014
The nice folk at Testing Cup in Poland have posted a video of my keynote on YouTube.
I originally gave a EuroSTAR keynote in 2002 titled 'What is the Value of Testing and how can we improve it?'. This talk brings the topic up to date but also introduces New Model Testing, which I'm working on at the moment.
I've written a paper that describes New Model Testing that can be downloaded here.
Paul Gerrard My linkedin profile is here My Mastodon Account
First published 27/03/2013
Did you know? We’re staging some webinars
Last night, we announced dates for two webinars that I will present on the subject, “Story-Based Test Automation Using Free Tools”. Nothing very exciting in that, except that it’s the first time we have used a paid-for service to host our own webinar and marketed that webinar ourselves. (In the past we have always pitched our talks through other people who marketed them).
Anyway, right now (8.40 PM GMT and less than 24 hours since we started the announcements) we have 96 people booked on the webinar. Our GoToWebinar account allows us to accept no more than 100. Looks like a sell-out. Great.
Coincidentally, James Bach and Michael Bolton have revisited and restated their positions on the “testing versus checking” and “manual versus automated testing” dichotomies (if you believe they are dichotomies, that is). You can see their position here: http://www.satisfice.com/blog/archives/856.
I don’t think these two events are related, but it seemed to me that it would be a good time to make some statements that set the scene for what I am currently working on in general and the webinar specifically.
Business stories and testing
You might know that we (Gerrard Consulting) have written and promoted a software development method (http://businessstorymethod.com) that uses the concept of business stories and have created a software as a service product (http://businessstorymanager.com) to support the method. The method is not a test method, but it obviously involves a lot of testing. Testing that takes place throughout the development process – during the requirements phase, development phase, test phase and ongoing post-production phases.
Business stories are somewhat more to us than ‘a trigger for a conversation’, but we’ll use the term ‘stories’ to refer to them from now on.
In the context of these phases, the testing in scope might be called by other names and/or be part of processes other than 'test': requirements prototyping and validation; Specification by Example, Behaviour-Driven Development, Acceptance Test-Driven Development or Test-Driven Development (take your pick); feature-acceptance testing; system testing; user testing; and regression testing during and after implementation and go-live.
There’s quite a lot of this testing stuff going on. Right now, the Bach-Bolton dialogue isn’t addressing all of this in a general way, so I’m keeping a watching brief on events in that space. I look forward to a useful, informative outcome.
How we use (business) stories
In this blog, I want to talk specifically about the use of stories in a structured domain-specific language – using, for example, the Gherkin format (see https://github.com/cucumber/gherkin) – to example (and that is a KEY word) requirements. I’m not interested in the Cucumber-specific extensions to the Gherkin syntax. I’m only interested in the feature heading (As a…/I want…/So that…) and the scenario structure (given…/when…/then…) and how they are used to test in a broader sense.
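For readers unfamiliar with the format, here is a minimal sketch of a story in Gherkin, showing only the parts discussed above – the feature heading and a scenario. The feature, account names and amounts are invented for illustration:

```gherkin
Feature: Account withdrawal
  As an account holder
  I want to withdraw cash from my account
  So that I can pay for things in cash

  Scenario: Withdrawal within the available balance
    Given my account balance is 100.00
    When I withdraw 30.00
    Then my account balance is 70.00
```

Each Given/When/Then line is a plain-language step that can later be bound to automation, but it is readable by business people as it stands.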
Story-based testing and automation
You see, the goals of an automated test (and let me persist in calling them tests for the time being) vary, and there are several distinct goals for story-based scenarios as test definitions.
In the context of a programmer writing code, the rote automation of scenarios as tests gives the programmer a head start in their test-driven development approach. (And crafting scenarios in the language of users segues into BDD of course). The initial tests a programmer would have needed to write already exist so they have a clearer initial goal. Whether the scenarios exist at a sufficiently detailed level for programmers to use them as unit-tests is a moot point and not relevant right now. The real value of writing tests and running them first derives from:
There is another benefit of using scenarios as the basis of automated tests. The language of the scenario (which is derived from the business's language in a requirement) can be expected to be reused in the test code. We can expect (or indeed mandate) the programmer to reuse that language in the naming of their variables and objects in code. The goals of Ubiquitous Language in systems (defined by Eric Evans and nicely summarised by Martin Fowler here http://martinfowler.com/bliki/UbiquitousLanguage.html) are supported.
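To illustrate how a scenario's wording can carry straight through into an automated test, here is a minimal sketch in Python. The account domain, names and amounts are invented for illustration, and the Account class is a stub standing in for real application code – it is not part of any method or product mentioned above:

```python
# A stub of the system under test; in practice this would be the
# application code the scenario exercises.
class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount


# The test reuses the business vocabulary of the scenario
# (account, balance, withdraw) rather than programmer-invented
# names, supporting a ubiquitous language. Each comment echoes
# a Given/When/Then step.
def test_withdrawal_within_available_balance():
    # Given my account balance is 100.00
    account = Account(balance=100.00)
    # When I withdraw 30.00
    account.withdraw(30.00)
    # Then my account balance is 70.00
    assert account.balance == 70.00
```

A test runner such as pytest would pick this function up by name; the point is that anyone who can read the scenario can also follow the test.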
Teams needing to demonstrate acceptance of a feature (identified and defined by a story), often rely on manual tests executed by the user or tester. The tester might choose to automate these and/or other behaviour or user-oriented tests as acceptance regression tests.
Is that it? Automated story tests are ‘just’ regression tests? Well maybe so.
The world is going 'software as a service' and the development world moves closer to continuous delivery approaches every day. The time available to do manual testing is shrinking rapidly. In extremis, to avoid bottlenecks in the deployment pipeline (http://continuousdelivery.com/2010/02/continuous-delivery/) there may be time only to perform cursory manual testing. Manual, functional testing of new features might take place in parallel with development and automation of functional tests must also happen ahead of deployment because automated testing becomes part of the deployment process itself. Perhaps manual testing becomes a test-as-we-develop activity?
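A deployment pipeline of this kind typically gates each promotion on the outcome of the automated tests. A minimal sketch, with invented stage names and placeholder actions (each lambda stands in for a real command such as a unit-test run, the automated story tests, or a deployment script):

```python
# Minimal sketch of a deployment pipeline gate: each stage runs only
# if every previous stage passed, and any failure halts the pipeline.
def run_stage(name, action):
    print(f"--- {name} ---")
    ok = action()
    if not ok:
        print(f"{name} failed: stopping the pipeline")
    return ok


def pipeline(stages):
    # all() with a generator short-circuits, so stages after a
    # failure are never run - the deployment is gated.
    return all(run_stage(name, action) for name, action in stages)


stages = [
    ("commit: unit tests", lambda: True),
    ("acceptance: automated story tests", lambda: True),
    ("deploy to production", lambda: True),
]
```

The gating is the essential point: if the automated story tests are the bottleneck, the release stalls, which is why their automation has to happen ahead of deployment.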
But there are two key considerations for this high-automation approach to work:
I wrote a series of four articles on 'Anti-Regression Approaches' here: http://gerrardconsulting.com/index.php?q=node/479. What are the skills of setting up regression test regimes? Not necessarily the same as those required to design functional tests. Primarily, you need automation skills and a knowledge of the internals of the system under test. Are these testing skills? Not really. They are more likely to be found in developers. This might be a good thing. Would it not be best to place responsibility for regression detection on those people responsible for introducing regressions? Maybe developers can do it better?
One final point. If testers are allowed (and I use that word deliberately) to test or validate requirements using stories in the way we suggest, then the quality of requirements provided to developers will improve. And so will the software they write. And the volume of testing we are currently expected to resource will reduce. So we need fewer testers. Or should I say checkers?
This is the essence of the “redistributed testing” offer that we, as testers, can make to our businesses.
The webinar is focused on our technical solution and is driven by the thinking above.
Last time I looked we had 97 registrants on the 4th April Webinar. If you are interested, the 12th April webinar takes place at 10 AM GMT – you can register for it here: https://attendee.gotowebinar.com/register/4910624887588157952
Tags: #testautomation #businessstorymethod #businessstories #BusinessStoryManager #BDD #tdd #ATDD
Paul Gerrard My linkedin profile is here My Mastodon Account
First published 20/09/2017
That sounds like yet another hype-fuelled statement intended to grab attention. It is attention-grabbing, but it’s also true. The scope of Digital[1] is growing to encompass the entirety of IT-related disciplines and the business that depends on it: that is – all business.
It is becoming clear that the scope and scale of Digital will include all the traditional IT of the past, but when fully realised it will include the following too:
The changes that are taking place really are significant because it appears that this decade – the 2010s – is the point at which several technological and social milestones are being reached. This decade is witness to some tremendous human and technological achievements.
A supercarrier has hundreds of thousands of interconnected systems and, with its crew of 5-6,000 people, could be compared to an average town afloat. Once at sea, the floating town is completely isolated except for its radio communications with base and other ships.
The supercarrier is comparable to what people are now calling Smart Cities. Wikipedia suggests this definition[7]:
“A smart city is an urban development vision to integrate multiple information and communication technology (ICT) and IoT solutions in a secure fashion to manage a city’s assets – the city’s assets include, but are not limited to, local departments' information systems, schools, libraries, transportation systems, hospitals, power plants, water supply networks, waste management, law enforcement, and other community services.”
The systems of a Smart City might not be as complex as those of an aircraft carrier, but in terms of scale, the number of nodes and endpoints within the system might be anything from a million to billions.
A smart city is not just bigger than an aircraft carrier – it also has the potential to be far more complex. The inhabitants and many of the systems move in the realm of the city and beyond. They move and interact with each other in unpredictable ways. On top of that, the inhabitants are not hand-picked like the military; crooks, spies and terrorists can usually come and go as they please.
Unlike a ship, isolated at sea, the smart city is extremely vulnerable to attack from individuals and unfriendly governments, and is comparatively unprepared for it.
But it’s even more complicated than that.
Nowadays, every individual carries their own mobile system – a phone at least – with them. Every car, bus and truck might be connected. Some will be driverless. Every trash can, streetlight, office building, power point, network access point is a Machine to Machine (M2M) component of a Digital Ecosystem which has been defined thus:
“A Digital Ecosystem is a distributed, adaptive, open socio-technical system with properties of self-organisation, scalability and sustainability inspired from natural ecosystems”[8].
The simplest system might be, for example, a home automation product – where you can control the heating, lighting, TV and other devices using a console, your mobile phone or office PC. The number of components or nodes might be ten to thirty. A medium complexity system might be a factory automation, monitoring and management system where the number of components could be several thousand. The number of nodes in a Smart City will run into the millions.
The range of systems we now deal with spans a few dozen to millions of nodes. In the past, a super-complex system might have had hundreds of interconnected servers. Today, systems are connected using services or microservices – provided by servers. In the future, every node on a network – even a simple sensor – will be a server of some kind, and there could be millions of them.
The scary notion of Big Brother[9] is set to become a reality – systems that monitor our every move, our buying, browsing and social activities – already exist. Deep or Machine Learning algorithms generate suggestions of what to buy, where to shop, who to meet, when to pay bills. They are designed to push notifications to us minute by minute.
Law enforcement will be a key user of CCTV, traffic, people and asset movement and our behaviours. Their goal might be to prevent crime by identifying suspicious behaviour and controlling the movement of law enforcement agents to places of high risk. But these systems have the potential to infringe our civil liberties too.
The legal frameworks of all nations embarking on Digital futures are some way behind the technology and the vision of a Digital Future that some governments are now forming.
In the democratic states, civil liberties and the rules of law are very closely monitored and protected. In non-democratic or rogue states, there may be no limit to what might be done.
A systems view does not do it justice – it seems more appropriate to consider Digital systems as ecosystems within ecosystems.
This text is derived from the first chapter of Paul's book, “Digital Assurance”. If you want a free copy of the book, you can request one here.
[1] From now on I’ll use the word Digital to represent Digital Transformation, Projects and the wide range of disciplines required in the ‘Digital World’.
[2] See for example, http://learn.hitachiconsulting.com/Engineering-the-New-Reality
[3] Internet.org is a Facebook-led organisation intending to bring the Internet to all humans on the planet.
[4] Referred to as ‘Autonomous Business Models’.
[5] http://spaceflight.nasa.gov/shuttle/upgrades/upgrades5.html
[6] http://science.howstuffworks.com/aircraft-carrier1.htm
[7] https://en.wikipedia.org/wiki/Smart_city
[8] https://en.wikipedia.org/wiki/Digital_ecosystem
[9] No, not the reality TV show. I mean the despotic leader of the totalitarian state, Oceania in George Orwell’s terrifying vision, “1984”.
Tags: #assurance #Digital #ALF #DigitalAssurance
Paul Gerrard My linkedin profile is here My Mastodon Account
First published 29/06/2015
Tags: #BDD #SP.QA #RobotFramework #ATDD
Paul Gerrard My linkedin profile is here My Mastodon Account
First published 13/10/2020
Nicholas Snogren posted on LinkedIn a reference to an “Axioms of Testing” presentation from 2009 and asked me to comment on his “Tenets of Software Testing”. There are some similarities – though not many, I think – and some parallels too, but his question prompted me to give a longer response than I guess was expected. I said...
“Hi, thanks for asking for my opinion. Your tenets look interesting – and although I don't think they map directly to what I've written, they raise points in my mind that need a little airing – my mind grows cobwebby over time, and it's good to brush off old ideas. A bit like exercising muscles that haven't been used for a while haha.”
I give my response as a comparison with my Tester's Pocketbook, and the Test Axioms website and presentations. (I notice that some of these posts are around 12 years old and some links don't work anymore. Some are out of my control; others I'll have to track down and correct – let me know if you want that.)
Firstly, let's get our definitions right.
According to dictionary.com, a Tenet is “any opinion, principle, doctrine, dogma, etc., especially one held as true by members of a profession, group, or movement.” Tenets might be regarded as beliefs that don't require proof and don't provide a strong foundation.
From the same source, an Axiom is, “i) a self-evident truth that requires no proof. ii) a universally accepted principle or rule. iii) Logic, Mathematics. a proposition that is assumed without proof for the sake of studying the consequences that follow from it”
I favoured the use of Axioms as a foundation for thinking in the testing domain. Axioms, if they are defensible, would provide a stronger foundational set of ideas. When I proposed a set of Testing Axioms, there was some resistance – here's a Prezi talk that introduces the idea.
James Bach in particular challenged the idea of Axioms and suggested I was creating a school of testing (when schools of testing were getting some attention) here.
By and large, by defining the Axioms in terms that are context-neutral, challenges have tended to be easy to acknowledge, disarm and set aside. Critics, almost entirely from the context-driven school, jumped the gun, so to speak – they clearly hadn't read what I had written before critiquing it. Only one or two people responded to James' call to arms to criticise the Axioms and challenged them.
The Axioms are fully described in The Tester's Pocketbook – http://testers-pocketbook.com/.
The Axioms of Testing website – https://testaxioms.com/ – sets out the Axioms with some explanation and provides around 50% of the pocketbook content for free.
Axioms caught the attention (and criticism) of people because I pitched them as universal principles or laws of testing. Tenets, being less strident in their definition, might not attract attention (or criticism) in the same way.
The Tenets are numbered and italicised. My comments in plain text.
These are properties of software. I'm not sure what 1 says other than that behaviour is triggered by interactions and presumably observed through interactions. But a lot of software behaves autonomously: it might respond to internal events such as the passing of time and might not exhibit any behaviour through interactions – a change of internal state, for example. I'm not sure 1 says much.
Tenet 2 is reasonable for reasonably-sized software artefacts.
In the Tester's Pocketbook, I hardly use the term software. I prefer that we test systems. Software is usually an important part of every system. Humans do not interact with software (except by reading or writing it). Software exists in the context of program compilation, hosted on operating systems, running on devices which have peripherals and other interconnected systems which may or may not have user interfaces.
Basing Axioms on Systems means that the Axioms are open to interpretation as Axioms of testing ANY system (i.e. anything. I don't press that idea – but it's an attractive one). Another 'benefit' is that all of the Systems Thinking principles can also be brought to bear on our arguments. Outside its context, Software is not a System.
3. Some of those behaviors are potentially negative, that is, would detract from the objectives of the software company or users.
I use the term Stakeholders to refer to parties interested in the valuable, reliable behavior of systems and the outcome and value of testing those systems.
4. The potentiality for that negative behavior is risk.
OK, but it could be better worded. I would simply say 'potential modes of failure' rather than negative behaviour.
5. It’s impossible to guarantee a lack of risk as it’s impossible to experience an infinite number of behaviors.
Not really. You can guarantee a no-risk situation if no one cares or no one cares enough to voice their concerns before testing (or after testing). There is always the potential for failure because systems are complex and we are not skilled enough to create perfect systems.
6. Therefore a subset of behaviors must be sampled to represent the risk.
Rather than represent, I would say trigger the failure(s) of concern to explore the risk and better inform a risk-assessment.
7. The ability to take an accurate sample, representative of the true risk, is a testing skill.
Not sure what you mean by sample – tests or test cases, I presume? Representative is a subjective notion, surely; 'true' I don't understand; and a testing skill would need more definition than this, wouldn't it?
8. A code change to an existing product may also affect the product in an infinite number of ways.
I'd use 'ANY' change, to a 'SYSTEM'. Why 'also'? What would you say fits into a 'not only.... but also...' clause? But I'm not sure I agree with this assertion anyway. A code change changes some software artefact. The infinite effects (faulty behaviors?) derive from infinite tests (or uses in production) – which you say in 5 is impossible to achieve. I'm not sure what you're trying to say here.
9. It is possible to infer that some behaviors are more likely to be affected by that change than others.
You can infer anything you like by calling upon the great Unicorn in the sky. How will you do this? Either you use tools which are limited in capability or you might use change and defect history or you might guess based on partial knowledge and experience.
10. The risk -of that change- is higher within the set of behaviors that are more likely to be affected by that change.
Do you mean probability of failure or the consequence of failure? I assume probability. At any rate, this is redundant. You have already asserted this in 9. But it's also more complicated than this – a cosmetic defect on an app can be catastrophic and a system failure negligible at times.
11. The ability to accurately estimate a scope of affected behavior is another testing skill.
I would call this the skill of impact analysis rather than testing. Developers are relatively poor at this, even though they have far deeper technical knowledge (either they aren't able, or they lack the time, to impact-analyse to any reliable degree). So we rely on testing to catch regressions, which is less than ideal. Testers depend on their experience rather than knowledge of system internals. But, since buggy systems behave in essentially unpredictable ways, we must admit our experience is limited and fallible. It's not a 'skill' that I would dwell on.
12. The scope and sampling ideas alone are meaningless without empirical evidence.
The scope and sampling ideas have meaning regardless of whether you implement them. I suppose you might say they are useless ideas if you don't gather evidence.
13. Empirical evidence is gathered through interactions with the product, observation of resultant behavior, and assessment of those observations.
The word empirical is redundant. I would use the word 'some' here. We also get evidence from operation in production, for example. (Unless you include that already?)
14. The accuracy and speed of scope estimation, behavior sampling, and gathering of evidence are key performance indicators for the tester.
If you are implying the activities in 13 are tester skills, I suppose you could make this assertion. But you haven't said what the value of evidence is yet. Is the purpose of testing only to evaluate the performance of testers? Hope not ;O)
15. Heuristics for the gathering of such evidence, the estimation of scope, and the sampling of behavior are defined in the Heuristic Test Strategy Model.
Heuristics are available in a wide range of sources including software, systems and engineering standards. Why choose such a limited source?
These tenets were inspired by James Bach’s “Risk Gap” and Doug Hubbard’s book “How to Measure Anything.” Both Bach and Hubbard discuss a very similar idea from different spaces. Hubbard suggests that by defining our uncertainty, we can communicate the value of reducing the uncertainty. Bach describes the “knowledge we need to know” as the “Risk Gap.” This Risk Gap is our uncertainty, and in defining it, we can compute the value of closing it. In testing, I realized we have three primary areas of uncertainty: 1) what is the “risk gap,” or knowledge we need to find out, 2) how can we know when we’ve acquired enough of that unknown knowledge, and 3) how can we design interactions with the program to efficiently reveal this knowledge.
There are several interesting anomalies to unpick here:
You seem to be trying to 'make a case' for testing as a tool to address the risk of failure in systems. I (like and) use that same approach in a rounder sense in my conference talks and writings, when practicable. My observations on this are:
I don't want to give the impression that I'm criticising Nicholas or am arguing against the concept of Tenets or Principles or Axioms of testing. All I have tried to do is offer reasonable criticism of the Tenets to show that it is a) extremely difficult to postulate bullet-proof Tenets, Principles or Axioms and b) extremely easy to criticise such efforts by:
I do this because I have been there many times since 2008 and occasionally have to defend the Test Axioms from such criticisms. I have to say, Critical Thinking is STILL a rare skill – I wish criticism were more often proffered as a result of it.
Paul Gerrard My linkedin profile is here My Mastodon Account
First published 10/04/2014
I'm working with Lalitkumar, who edits the Tea Time With Testers online magazine. It has a large circulation and I've agreed to write an article series for him on 'Testing the Internet of Everything'. I'll also be presenting webinars to go with the articles, the first of which is here: https://attendee.gotowebinar.com/register/1854587302076137473 It takes place on Saturday 19 April at 15.30. An unusual time – but there you go.
You can download the magazine from the home page here: teatimewithtesters.com/
Lalit has asked for questions on the article, and I'll respond to these during the webinar. For questions on a broader range of testing-related subjects, I'll write a response for the magazine. But I'll also blog these questions and answers here.
Questions that result in an interesting blog post will receive a free Tester's Pocketbook – if you go through the TTWT website and contact Lalit, anything goes. I look forward to some challenging questions :O)
The first Q&A will appear shortly...
Paul Gerrard My linkedin profile is here My Mastodon Account
A question from Amanda in Louisville, Kentucky, USA.
“What's the acceptable involvement of a QA analyst in the requirements process? Is it acceptable to communicate with users or should the QA analyst work exclusively with the business team when interpreting requirements and filling gaps?
As testers, we sometimes must make dreaded assumptions and it often helps to have an awareness of the users' experiences and expectations.”
“Interesting question, Amanda. Firstly, I want to park the ‘acceptable’ part of your question. I’ll come back to it, I promise.
Let me suggest firstly, that collaboration and consensus between users, BAs, developers and testers is helpful in almost all circumstances. You may have heard the phrase ‘three amigos’ in Agile circles to describe user/BA, developer and tester collaboration. What Agile has reminded us of most strongly is that regular and rapid feedback is what keeps momentum going in knowledge-based (i.e. software development) projects.
In collaborative teams, knowledge gets shared fast and ‘dreaded assumptions’ don’t turn into disasters. I can think of no circumstance where a tester should not be allowed to ask awkward questions relating to requirements like ‘did you really mean this...?’, ‘what happens if...?’, ‘Can you explain this anomaly?’, ‘If I assume this..., am I correct?’. Mostly, these questions can be prefaced with another.
‘Can I ask a stupid question?’ reduces the chance of a defensive or negative response. You get the idea, I’m sure.
Where there is uncertainty, people make assumptions unless they are encouraged to ask questions and challenge other people’s thinking – to get to the bottom of problems. If you (as a tester) make assumptions, it’s likely that your developers will too (and different assumptions, for sure). Needless to say the users, all along, may be assuming something entirely different. Assume makes an ass of u and me (heard that one before?)
So – collaboration is a very positive thing.
Now, you ask whether it is ‘acceptable’ for testers to talk direct to users. When might it not be “acceptable”? I can think of two situations at least. (There are probably more).
One would be where you as a tester work for a system supplier and the users and BAs work for your customer. Potentially, because of commercial/contractual constraints you might not be allowed to communicate directly. There is a risk (on both sides) that a private agreement between people who work for the supplier and customer might undermine or conflict with an existing contract. The formal channels of communication must be followed. It is a less efficient way of working, but sometimes you just have to abide by commercial rules. Large, government or high-integrity projects often follow this pattern.
Another situation may be this. The BA perceives their role to be the interface between end users and a software project team. No one is allowed to talk direct to users because private agreements can cause mayhem if only some parties are aware of them. The BA is accountable to users and the rest of the project team for changes to requirements. There may be good reasons for this, but if you all work for the same organisation what doesn’t help is a ‘middle man’ who adds no value but distorts (unknowingly, accidentally or deliberately) the question from a tester and the response from a user.
Now, a good (IMHO) BA would see it as perfectly natural to allow testers (and other project participants) to ask questions of users directly, but it is also reasonable for them to be present, to assess consequences, to facilitate discussion, to capture changed requirements and disseminate them. That’s pretty much their job. A tester asking awkward questions is teasing out value and reducing uncertainty – a good thing. Who would argue with that?
But some BAs feel they ‘own’ the relationship with users. They get terribly precious about it and feel threatened and get defensive if other people intervene. In this case, the ‘not acceptable’ situation arises. I have to say, this situation reflects a rather dysfunctional relationship, not a good one. It isn’t helpful, puts barriers in the way of collaboration, introduces noise and error into the flow of information, causes delays and causes uncertainty. All together a very bad thing!
Having said all that, with this rather long reply I’ve overrun some quota or other, I’m sure. The questions I would ask are: ‘unacceptable to whom?’ and ‘why?’ Are BAs defending a sensible arrangement or are they being a pain in the assumption?”
Paul Gerrard My linkedin profile is here My Mastodon Account