Paul Gerrard

My experiences and opinions in the Test Engineering business. I am republishing/rewriting old blogs from time to time.

First published 15/10/2016

Last month, I presented a webinar for the EuroSTAR conference. “New Model Testing: A New Test Process and Tool” can be seen below. To see it, you have to enter some details – this is not under my control, but that of the EuroSTAR conference folk. The slides are also below.

Documentation prepared in structured/waterfall or agile projects is often of dubious value. In structured projects, the planning documentation is prepared in a knowledge vacuum – where the requirements are not stable, and the system under test is not available. In agile projects – where time is short and other priorities exist – not much may get written down anyway. I believe the only test documentation that can be captured reliably and trusted must be captured contemporaneously with exploration and testing.

The only way to do this would be using a pair tester or a bot to capture the thoughts of a tester as they express them. I've been working on a prototype robot that can capture the findings of the tester as they explore and test. The bot can be driven by a paired tester, but it has a speech recognition front-end so it can be used as a virtual pair.

From using the bot, it is clear that a new exploration and planning metaphor is required – I suggest Surveying – and we also need a new test process.

In the webinar, I describe my experiences of building and using the bot for paired testing and also propose a new test process suitable for both high integrity and agile environments. The bot – codenamed Cervaya™ – builds a model of the system as you explore and captures test ideas, risks and questions and generates structured test documentation as a by-product.

If you are interested in collaborating - perhaps as a Beta Tester - I'd be delighted to hear from you.

The slides can be seen here:

New Model Testing: A New Test Process and Tool from TEST Huddle


Tags: #NewModelTesting #Cervaya

Paul Gerrard Please connect and contact me using my linkedin profile. My Mastodon Account

First published 05/08/2015

The BBC asked me to pitch an 'interesting talk' to an audience at the BBC Radio theatre in Portland Place, London. The BBC was running a Digital Open Day aimed at recruiting people. My contribution was an introduction to the New Model for Testing and you can play the video here. Except you can't as they seem to have withdrawn that and many other videos, for some reason.

So try this version that was a talk I gave in Minsk, Belarus. Scroll to 24 minutes if you don't want the whole context.

By the way...

It was quite a thrill to be on the stage of the Radio Theatre. When I was a kid, I used to avidly listen to the BBC Radio 1 'In Concert' live shows on a Saturday night. I just discovered that the BBC have catalogued these shows (and many others) here: http://www.bbcradioint.com/ContentFiles/In_Concert_Catalogue.pdf As you can see, there's a stellar list of performers.

Tags: #NewModelTesting #BBC


First published 11/02/2013

This tutorial suggests that rather than being a document, test strategy is a thought process. The outcome of the thinking might be a short or a long document, but most importantly, the strategy must address the needs of the participants inside the project as well as the customers of the product to be built. It needs to be appropriate to a short agile project or to a 1000 man-year development. It has to have the buy-in of stakeholders but, above all, it must have value and be communicated.

This tutorial presents a practical definition of a Test Strategy, provides a simple template for creating one and describes a systematic approach to thinking the right way. This will be an interactive session. Bring your test strategy problems with you – we'll try and address them during the day. You will receive a free copy of the Tester's Pocketbook.

Dates & Venues

19 February 2013 – London 09 April 2013 – London 21 May 2013 – London 16 June 2013 – London 10 September 2013 – London 22 October 2013 – London 10 December 2013 – London

Download the course brochure here.

Visit the Unicom website for price and booking details.

Tags: #TestStrategy #inaday


First published 10/05/2016

Some Labels are Bad, Get Over It

There's been a thoughtful debate on the notion of 'test automation' being a rather bad, lazy misrepresentation as a concept or entity or thing. James Bach and Michael Bolton wrote a paper here. Alan Richardson, in Testing Circus Magazine, invites you to join the Anti Test-Automation Brigade. Pretty much, I agree with the sentiments expressed. The term is bad; it exhibits and probably promotes muddled thinking. I will do my best not to use it – unless I need to reference the range of tools that dominate the testing tools market.

I'll return to this a bit later.

My Dad called himself an engineer (or rather, his company did). In his national service, he joined the Royal Engineers and was very proud of his time in the post-war, peacetime Army. He told me that he spent much of his time going on training courses. When offered the choice of a course on "4.5 ton truck engine maintenance" or two weeks tramping around muddy fields in Germany, unlike many of his comrades who preferred the mud to a classroom or workshop, he chose the courses.

He also did some real engineering - building and dismantling bridges, towing tanks out of ditches and other hearty activities. (Once, a tank was bogged down near a churchyard. Dad's idea was to pass a cable around the church to improve the towing angle on the cable. When the recovery truck pulled, the tank was unmoved, but the church started to disintegrate. They stopped the truck and figured out another way to extract the tank, leaving the church intact, to great relief.) And so on.

So when the time came to select a course at university - an engineering degree was the natural choice for me. I thought I would be part of a select, engineering elite. Yes, there were civil, electrical and mechanical engineers, but they were all a happy and respected band of professionals. Then I worked at a large civil engineering consultancy. Everyone was an engineer. And then I noticed, half the male population called themselves engineers of one sort or another. When I got into the software business, I discovered some software people also called themselves engineers. This was a step too far.

Now, my main criterion for the definition of an engineer is someone who works with concepts, designs or materials whose behaviour is underpinned by the laws of physics. Software, in my humble opinion, does not obey any laws of physics that I know - so it is not an engineering discipline. Discuss. At any rate, it seems nowadays that anyone can call themselves an engineer. Raging against this calumny seems not to be a good use of my time.

The testing tool vendors are not about to change their marketing and rename all their tools on the basis of complaints from a few vociferous individuals. So I suspect we are stuck with the term 'test automation' for the foreseeable future. I support the view that test automation is a lousy term, but I don't think I'm going to get excited about removing or replacing it.

However.

A Bad Term, Well-Scoped?

The view that test automation does not encompass support for all of testing is obvious. If test automation does have a scope, it covers the application of tests (or checks). Some tools can also perform logging and reporting, but let me ignore that for the moment. I am using the terminology of the New Model below.

New Model of Testing

Let me suggest test automation refers to the 'Applying' activity only. If you add in other utilities that come under the banner of "testing with tool support", these utilities mostly fall into the Applying aspect of testing. Building environments, preparing test data, generating combinations, analysing results, editors, comparators, analysers, harnesses, frameworks, loggers, cleaner-uppers and tear-downers and so on are all pretty much covered by the Applying activity. So if test automation represents the Applying activity only, it may not be well named, but it could at least be well-scoped.

Perhaps we should rename Test Automation ... Test Application? Or just Application?

More importantly, the nine other activities in the New Model, plus the judgements of model adequacy or inadequacy, hardly get a mention in the argument, which ignores the opportunities to support thinking, collaboration, modelling, test design, record-keeping and so on.

We Need to Curb Our Enthusiasm (or Addiction)

It seems to me that the testing industry - both testers and vendors - is obsessed with test automation to the degree that (with some exceptions) testers are not demanding products to support exploration and modelling activities on the left-hand side of the New Model and vendors are not investing in R&D. There are signs this is changing, but it has been a slow process so far. Supporting these activities could bring huge benefits to testers. The market for these tools (required by all testers, possibly) is also huge. Why are test design tools not being demanded and delivered?

If we label all test tools test automation, it takes our eyes off the real prize - tools to support all of testing. Testers and vendors need a broader perspective.

I've been banging on about wanting Test Design tools rather than Test Execution tools for some time. I dug out a talk from 1999 which I gave several times, including a variation at STAR East 1999. I am quite pleased - maybe 75% could be repeated today without embarrassment. But I am also rather depressed that I am making almost identical recommendations nearly sixteen years later. Click on the image or here to see the slides.

CAST Past Present and Future

What am I Doing About This?

Firstly, you may know I am working to create a comprehensive tools directory at https://tkbase.com - it covers DevOps, Collaboration and Testing, and there are not ten or twenty tool types - there are over 180 tool types with 2,414 tools listed.

Second, I am writing a paper on the application of automation, AI and Deep Learning to support the other (exploration, thinking, collaboration and record-keeping) activities of testing.

Finally, for the last few months, I've been writing and experimenting with a robot that supports exploration and testing - exploratory testing - if you like. It will support team exploration, pair testing and can also be voice controlled. I'll have something to demonstrate in the next month or so. I'll be talking about experiences with this more fully later in the year. If you are interested in being a pre-alpha tester - do get in touch :O)

What are you doing to further the case for tools that support all of testing?



Tags: #engineers #TestAutomation #NewModelTesting


First published 10/02/2017

The Test Management Forum was set up in 2004 and the first meeting took place on 28 January of that year. We have run events every quarter since then, and the 53rd and latest meeting took place at the end of January. Over 2,600 people have attended the London events over the years. Although most Forum attendees come from the UK, the uktmf.com website has approaching 10,000 registered users worldwide and the LinkedIn group has 11,800 members.

The TMF continues to be a popular Forum with a global (and not just a UK) influence.

The original aim of the Forum was to bring together more senior practitioners to network and share knowledge. The core of the Forum meetings is the discussion sessions. Each is run by an expert in their field and comprises an introductory presentation of 15-30 minutes or so, followed by a facilitated discussion of the issues raised in the talk. Sessions are 75 minutes long and often stir up vigorous debate. The format and ethos of the Forum are unchanged since 2004.

But over the last thirteen years, there have been many changes in the industry and in the Test Management community. I’d like to talk around some of these changes and explain why we decided to bring the TMF up to date with an industry that is far different from that in the early days of the Forum.

In 2004, the Agile movement was still in its early stages. For the next ten years or so, we ran many sessions that explored the changing (sometimes disappearing) role of test managers. I recall having many conversations with people who had been ‘Agiled’ and were concerned that their role and contribution to Agile projects was uncertain.

The Test Managers I knew personally reacted in very different ways. Some managers and leaders became testers again, some specialised in test automation or security testing or usability. Others moved closer to their business, became business analysts or consultants. Quite a few were promoted out of testing altogether to become project managers, development leads or heads of solution delivery. Some senior managers retired from IT altogether.

Agile had a big impact, but approaches such as continuous delivery, DevOps, shift-left, shift-right, shift-wherever and analytics are changing the roles of testers in fundamental ways and, I should say, usually for the better. Some testers are falling by the wayside, getting out of testing. The pressures on Test Managers continue, but new doors are opening and opportunities emerging. Some test managers become Scrum Masters; more are finding new roles which I have labelled Test Masters and, in general, these people are performing an Assurance role.

Assurance (as distinct from Quality Assurance) is very much focused on delivery. The assurance role supports the delivery team by acting, at various times, as a process consultant, as a testing expert, as a reviewer, as a project-board level advisor and sometimes as an auditor. This variety of roles is more senior, more comprehensive and more influential than the traditional test manager role. The range of skills and authority required is beyond many test managers – it is not for everybody – but Assurance is a broader, more senior role and a natural advancement for aspiring test professionals.

With this background, we have decided to re-brand the Test Management Forum to be the Assurance Leadership Forum (or ALF) from April 2017. The ALF has very much the same goals as the TMF, but with a broader organisational, managerial and delivery-focused remit. Test leadership and technical testing issues will figure prominently in the ALF, but the range of topic areas will increase, with an alignment to the aspirations and career progression of senior practitioners.

Mike Jarred, of the Financial Conduct Authority has helped behind the scenes with the Forum for some time. Mike is also one of those managers with a testing background who has advanced beyond testing and into broader software delivery management. He is well placed to take over the Programme Management of the Forum. In December, I moved house to Macclesfield in Cheshire, making it harder for me to host and programme the London-focused Forum events. Mike offered to take the programme role and I gladly took up his offer.

I will continue to be the ‘host’ of the Forum and of course support Mike in putting the events together. My more focused role will allow me to take a more active part in the discussions themselves, for a change! Before too long, I'm sure, I’ll be asking for expressions of interest in setting up a Northern Forum in the Summer.

In the meantime, I hope you will support the Assurance Leadership Forum, and Mike in leading it.

The existing uktmf.com domain will continue to point to the Forum website, but a new domain ukalf.com points to the same content and will be our preferred url from now on. The LinkedIn group will be also renamed shortly.

Over the next few weeks, we’ll be rewriting some of the content of the website to better define the purpose of the ALF. I expect we’ll have a debate on possible directions in the next Summit. The First ALF Summit will take place on 25 April in London and if you want to contribute to the day, do let us know.



Tags: #TMF #ALF


First published 20/12/2012

After the Next Generation Testing event in December 2012, Unicom asked me to summarise the thinking behind the proposal for the redistribution of testing. Four minutes 11 seconds. :O)



Tags: #BDD #redistribution #tdd


First published 12/04/2012

 

In this approach, the technique involves taking a requirement and identifying the feature(s) it describes. For each feature, a story summary and a series of scenarios are created, and these are used to feed back examples to stakeholders. In a very crude way, you could regard the walkthrough of scenarios and examples as a ‘paper-based’ unit or component test of each feature.

What do you mean ‘unit or component test’?

A story identifies a feature and includes a set of examples that represent its required behaviour. It is not a formal component specification, but it summarises the context in which a feature will be used and a set of business test cases. Taken in isolation, the story and scenarios, when compared with the requirement from which they are derived, provide a means of validating the understanding of the requirement.

Rather than test the functionality of a feature (which is done later, usually by developers), the story and scenarios test the requirement itself. When anomalies are cleared up, what remains is a clarified requirement and a story that identifies a feature and provides a set of clarifying examples. The definition of the identified feature is clarified and validated in isolation – just as a component can be tested and trusted in isolation.

The scenarios are limited in scope to a single feature but, taken together, a set of stories validates the overall consistency and completeness of a requirement with respect to the feature(s) it describes.

Creating Stories and Scenarios – DeFOSPAM

DeFOSPAM is the mnemonic the Business Story Method™ uses to summarise the seven steps used to create a set of stories and scenarios for a requirement that allow us to comprehensively validate our understanding of that requirement.
  • Definitions
  • Features
  • Outcomes
  • Scenarios
  • Prediction
  • Ambiguity
  • Missing
The number of features and stories created for a requirement is obviously dependent on the scope of the requirement. A 100-word requirement might describe a single system feature, and a few scenarios might be sufficient. A requirement that spans several pages of text might describe multiple features and require many stories and tens of scenarios to cover fully. We recommend you try to keep it simple by splitting complex requirements.
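The stories and scenarios that DeFOSPAM produces can be held in a very simple structure. Here is a minimal Python sketch; the class shapes, feature name and scenario text are my own invention for illustration and are not part of the Business Story Method itself:

```python
from dataclasses import dataclass, field

@dataclass
class Scenario:
    condition: str   # the situation the feature must deal with
    outcome: str     # the predicted behaviour for that situation

@dataclass
class Story:
    feature: str     # one story per feature
    summary: str
    scenarios: list = field(default_factory=list)

# One requirement may yield several stories; keep requirements small.
story = Story(
    feature="Order entry",
    summary="Capture orders within the customer's credit limit",
)
story.scenarios.append(
    Scenario("order total exceeds credit limit", "order is rejected")
)
```

The point of the structure is only that every scenario carries both a condition and a predicted outcome, which is what the later Prediction and Ambiguity steps rely on.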

D – Definitions

If agreement on terminology or feature descriptions cannot be attained, perhaps this is a sign that stakeholders do not actually agree on other things? These could be the goals of the business, the methods or processes to be used by the business, the outputs of the project or the system features required. A simple terminology check may expose symptoms of serious flaws in the foundation of the project itself. How powerful is that?

The need for definitions of terms used in requirements and stories arises as they are written and should be picked up by the author as they write. However, it is not uncommon for authors to be blind to the need for definitions because they might be using language and terminology that is very familiar to them. Scanning requirements and stories to identify the terms and concepts that need definition or clarification by subject matter experts is critical.

Getting the terminology right is a very high priority. All subsequent communication and documentation may be tainted by poor or absent definitions. The stories and scenarios created to exemplify requirements must obviously use the same terminology as the requirement, so it is critical to gain agreement early on.

Identify your sources of definitions. These could be an agreed language dictionary, source texts (books, standards etc.) and a company glossary. The company glossary is likely to be incomplete or less precise than required. The purpose of the definitions activity is to identify the terms needing definition, to capture and agree terminology and to check that the language used in requirements and stories is consistent.

On first sight of a requirement text, underline the nouns and verbs and check that these refer to agreed terminology or that a definition of those terms is required.

  • What do the nouns and verbs actually mean? Highlight the source of definitions used. Note where definitions are absent or conflict.
  • Where a term is defined, ask stakeholders – is this the correct, agreed definition? Call these ‘verified terms’.
  • Propose definitions where no known definition exists. Mark them as ‘not verified by the business’. Provide a list of unverified terms to your stakeholders for them to refine and agree.
When you start the process of creating a glossary, progress will be slow. But as terms are defined and agreed, progress will accelerate rapidly. A glossary can be viewed as a simple list of definitions, but it can be much more powerful than that. It’s really important to view the glossary as a way of making requirements both more consistent and compact – and not treat glossary maintenance as just an administrative chore.
  • A definition can sometimes describe a complex business concept. Quite often in requirements documents, there is huge scope for misinterpretation of these concepts, and explanations of various facets of these concepts appear scattered throughout requirements documents. A good glossary makes for more compact requirements.
  • Glossary entries don’t have to be ‘just’ definitions of terminology. In some circumstances, business rules can be codified and defined in the glossary. A simple business rule could be the validation rule for a piece of business data, for example a product code. But it could be something much more complex, such as the rule for processing invoices and posting entries into a sales ledger.
Glossary entries that describe business rules might refer to features identified elsewhere in the requirements. The glossary (and an index of usage of glossary entries) can therefore provide a cross-reference of where a rule and its associated system feature are used.
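A first pass of the terminology check can be partly mechanised: scan a requirement for candidate terms that have no glossary entry. This Python sketch is illustrative only; the glossary entries, the requirement text and the candidate term list are invented, and in practice the candidate terms would come from a human read-through of the nouns and verbs:

```python
# Hypothetical glossary: verified terms carry agreed definitions;
# unverified terms are flagged for stakeholder review.
glossary = {
    "order": {"definition": "A customer request to purchase products", "verified": True},
    "credit limit": {"definition": "Maximum outstanding balance allowed", "verified": False},
}

def undefined_terms(requirement_text, glossary, candidate_terms):
    """Return candidate terms that appear in the text but have no glossary entry."""
    text = requirement_text.lower()
    return [t for t in candidate_terms if t in text and t not in glossary]

req = "The system will reject orders that exceed the customer's credit limit."
missing = undefined_terms(req, glossary, ["order", "credit limit", "customer", "reject"])
# 'customer' and 'reject' need definitions; 'order' and 'credit limit' are covered
```

The unverified entries ("verified": False) are exactly the list you would hand to stakeholders to refine and agree.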

F – Features – One Story per Feature

Users and business analysts usually think and document requirements in terms of features. A feature is something the proposed system needs to do for its user; it helps the user to meet a goal or supports a critical step towards that goal.

Features play an important part in how business users, wishing to achieve some goal, think. When visualising what they want of a system, they naturally think of features. Their thoughts traverse some kind of workflow where they use different features at each step in the workflow. ‘… I’ll use the search screen to find my book, then I’ll add it to the shopping cart and then I’ll confirm the order and pay’.

Each of the phrases, ‘search screen’, ‘shopping cart’, ‘confirm the order’ and ‘pay’ sound like different features. Each could be implemented as a page on a web site perhaps, and often features are eventually implemented as screen transactions. But features could also be processes that the system undertakes without human intervention or unseen by the user. Examples would be periodic reports, automated notifications sent via email, postings to ledgers triggered by, but not seen by, user activity. Features are often invoked by users in sequence, but features can also collaborate to achieve some higher goal.

When reading a requirement and looking for features, sometimes the features are not well defined. In this case, the best thing is to create a story summary for each and move on to the scenarios to see how the stories develop.

Things to look for:

  • Named features – the users and analysts might have already decided what features they wish to see in the system. Examples could be ‘Order entry’, ‘Search Screen’, ‘Status Report’.
  • Phrases like, ‘the system will {verb} {object}’. Common verbs are capture, add, update, delete, process, authorise and so on. Object could be any entity the system manages or processes for example, customer, order, product, person, invoice and so on. Features are often named after these verb-object phrases.
  • Does the requirement describe a single large or complex feature or can distinct sub-features be identified? Obviously larger requirements are likely to have several features in scope.
  • Are the features you identify the same features used in a different context or are they actually distinct? For example, a feature used to create addresses might be used in several places such as adding people, organisations and customers.
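The 'the system will {verb} {object}' pattern in particular lends itself to a crude automated scan for candidate feature names. A Python sketch under stated assumptions: the verb list and the requirement text are illustrative, and a real scan would need a much richer vocabulary and a human filter:

```python
import re

# Rough scan for 'the system will {verb} {object}' phrases.
FEATURE_PATTERN = re.compile(
    r"the system will (capture|add|update|delete|process|authorise)\s+(\w+)",
    re.IGNORECASE,
)

requirement = (
    "The system will capture orders from the web form. "
    "The system will authorise payments before despatch."
)

# Candidate feature names, one per verb-object phrase found.
features = [f"{verb.capitalize()} {obj}"
            for verb, obj in FEATURE_PATTERN.findall(requirement)]
```

Each candidate then gets the human checks from the list above: is it a named feature, is it distinct, or is it the same feature used in a different context?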

O – One Scenario per Outcome

More than anything else, a requirement should identify and describe outcomes. An outcome is the required behaviour of the system when one or more situations or scenarios are encountered. We identify each outcome by looking for requirements statements that usually have two positive forms:
  • Active statements that suggest that, ‘…the system will…’
  • Passive statements that suggest that ‘…valid values will …’ or ‘…invalid values will be rejected…’ and so on.
Active statements tend to focus on behaviours that process data, complete transactions successfully and have positive outcomes. Passive statements tend mostly to deal with data or state information in the system.

There is also a negative form of requirement. In this case, the requirement might state, ‘…the system will not…’. What will the system not do? Usually, these requirements refer to situations where the system will not accept or proceed with invalid data or where a feature or a behaviour is prohibited or turned off, either by the user or the state of some data. In almost every case, these ‘negative requirements’ can be transformed into positive requirements, for example, ‘not accept’ could be worded as ‘reject’ or possibly even as ‘do nothing’.

You might list the outcomes that you can identify and use that list as a starting point for scenarios. Obviously, each unique outcome must be triggered by a different scenario. You know that there must be at least one scenario per outcome.

There are several types of outcome, of which some are observable but some are not.

Outputs might refer to web pages being displayed, query results being shown or printed, messages being shown or hard copy reports being produced. Outputs refer to behaviour that is directly observable through the user interface and result in human-readable content that is visible or available on some storage format or media (disk files or paper printouts).

Outcomes often relate to changes of state of the system or data in it (for example, updates in a database). Often, these outcomes are not observable through the user interface but can be exposed by looking into the database or system logs perhaps. Sometimes outcomes are messages or commands sent to other features, sub-systems or systems across technical interfaces.

Often an outcome that is not observable is accompanied by a message or display that informs the user what has happened. Bear in mind that it is possible for an outcome or output to be ‘nothing’. Literally nothing happens. A typical example here would be the system’s reaction to a hacking attempt or the selection of a disabled menu option/feature.

Things to look out for:

  • Words (usually verbs) associated with actions or consequences. Words like capture, update, add, delete, create, calculate, measure, count, save and so on.
  • Words (verbs and nouns) associated with output, results or presentation of information. Words like print, display, message, warning, notify and advise.
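The two word lists above can drive a simple first-pass scan that flags sentences likely to describe outcomes. This Python sketch is illustrative only; the word sets echo the examples above and the requirement text is invented, and the scan is no substitute for reading the requirement:

```python
ACTION_WORDS = {"capture", "update", "add", "delete", "create", "calculate", "save"}
OUTPUT_WORDS = {"print", "display", "message", "warning", "notify", "advise"}

def flag_outcome_sentences(requirement_text):
    """Tag each sentence with the action/output keywords it contains."""
    flagged = []
    for sentence in requirement_text.split(". "):
        words = set(sentence.lower().replace(".", "").split())
        hits = (words & ACTION_WORDS) | (words & OUTPUT_WORDS)
        if hits:
            flagged.append((sentence.strip(), sorted(hits)))
    return flagged

req = "The system will calculate the total. A warning will display if stock is low"
flagged = flag_outcome_sentences(req)
# both sentences are flagged: one action word, then two output words
```

Each flagged sentence is a candidate outcome that must later be traced back to at least one scenario.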

S – Scenarios – One Scenario per Requirements Decision

We need to capture scenarios for each decision or combination of decisions that we can associate with a feature.

Often, the most common or main success scenario is described in detail and might be called the ‘main success’ or ‘default’ scenario. (In the context of use cases, the ‘main success scenario’ is the normal case. Variations to this are called extensions or scenarios). The main success scenarios might also be called the normal case, the straight-through or plain-vanilla scenario. Other scenarios can represent the exceptions and variations.

Scenarios might be split into those which the system deals with and processes, and those which the system rejects because of invalid or unacceptable data or particular circumstances that do not allow the feature to perform its normal function. These might be referred to as negative, error, input validation or exception condition cases.

The requirement might present the business or technical rules that govern the use of input or stored data or the state of some aspect of the system. For example, a rule might state that a numeric value must lie within a range of values to be treated as valid or invalid, or the way a value is treated depends on which band(s) of values it lies in. These generalised rules might refer to non-numeric items of data being classified in various ways that are treated differently.

A scenario might refer to an item of data being treated differently, depending on its value. A validation check of input data would fall into this category (different error messages might be given depending on the value, perhaps). But it might also refer to a set of input and stored data values in combination. A number of statements describing the different valid and invalid combinations might be stated in text or even presented in a decision-table.

Things to look out for:

  • Phrases starting (or including) the words ‘if’, ‘or’, ‘when’, ‘else’, ‘either’, ‘alternatively’.
  • Look for statements of choice where alternatives are set out.
  • Where a scenario in the requirement describes numeric values and ranges, what scenarios (normal, extreme, edge and exceptional) should the feature be able to deal with?
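Where the requirement presents rules as combinations of conditions, enumerating the full decision table guards against silently missed scenarios. A small Python sketch with invented conditions; the condition names and values are mine, not from any real requirement:

```python
from itertools import product

# Two conditions, two values each: the full table has 2 x 2 = 4 scenarios.
conditions = {
    "order_value": ["within limit", "over limit"],
    "customer_status": ["active", "suspended"],
}

# Every combination becomes a scenario needing an agreed outcome.
scenarios = [dict(zip(conditions, combo))
             for combo in product(*conditions.values())]
```

If the requirement text only describes three of the four rows, the fourth is a question for the stakeholders, not a gap to guess at.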

P – Prediction

Each distinct scenario in a requirement setting out a situation that the feature must deal with, should also describe the required outcome associated with that scenario. The required outcome completes the definition of a scenario-behaviour statement. In some cases, the outcome is stated in the same sentence as the scenario. Sometimes a table of outcomes is presented, and the scenarios that trigger each outcome are presented in the same table.

A perfect requirement enables the reader to predict the behaviour of the system’s features in all circumstances. The rules defined in the requirements, because they generalise, should cover all of the circumstances (scenarios) that the feature must deal with. The outcome for each scenario will be predictable.

Now, of course, it is completely unrealistic to expect a requirement to predict the behaviour in all possible situations because most situations are either not applicable or apply to the system as a whole, rather than a single feature.

However, where scenarios are identifiable, the need must be to predict and associate an outcome with those scenarios.

When you consider the outcomes identified in the Outcomes stage, you might find it difficult to identify the conditions that cause them. Sometimes, outcomes are assumed, or a default outcome may be stated but not associated with scenarios in the requirements text. These ‘hanging’ outcomes might be important but might never be picked up by a developer. Unless, that is, you focus explicitly on finding them.

Things to look out for:

  • Are all outcomes for the scenarios you have identified predictable from the text?
  • If you cannot predict an outcome, try inventing your own outcomes – perhaps a realistic one and perhaps an absurd one – and keep a note of these. The purpose is to force the stakeholder to make a choice and provide clarification.

A – Ambiguity

The Definitions phase is intended to combat the use of ambiguous or undefined terminology. The other major area to be addressed is ambiguity in the language used to describe outcomes.

Ambiguity strikes in two places. Scenarios identified from different parts of the requirements appear to be identical but have different or undefined outcomes. Or two scenarios appear to have the same outcomes, but perhaps should have different outcomes to be sensible.

There are several possible anomalies to look out for:

  • Different outcomes imply different scenarios but it appears you can obtain the same outcome with multiple scenarios. Is something missing from the requirement?
  • It is possible to derive two different outcomes for the same scenario. The requirement is ambiguous.

In general, the easiest way to highlight these problems to stakeholders is to present the scenario/outcome combinations as you see them and point out their inconsistency or duplication.

Look out for:

  • Different scenarios that appear to have identical outcomes but where common sense says they should differ.
  • Identical scenarios that have different outcomes.
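Once scenario/outcome pairs are tabulated, both kinds of anomaly can be checked mechanically. Here is a rough Python sketch – the scenarios and error codes are invented for illustration:

```python
from collections import defaultdict

# Invented scenario/outcome pairs harvested from a requirement.
pairs = [
    ("password empty", "show error E1"),
    ("password too short", "show error E2"),
    ("password too short", "show error E3"),  # same scenario, two outcomes
    ("password too long", "show error E2"),   # different scenario, same outcome
]

outcomes_by_scenario = defaultdict(set)
scenarios_by_outcome = defaultdict(set)
for scenario, outcome in pairs:
    outcomes_by_scenario[scenario].add(outcome)
    scenarios_by_outcome[outcome].add(scenario)

# Identical scenarios with different outcomes: the requirement is ambiguous.
ambiguous = {s for s, outs in outcomes_by_scenario.items() if len(outs) > 1}
# Different scenarios sharing an outcome: possibly fine, but worth questioning.
shared = {o for o, scens in scenarios_by_outcome.items() if len(scens) > 1}

print("Ambiguous scenarios:", ambiguous)
print("Outcomes shared by several scenarios:", shared)
```

The first set is a defect to raise; the second is a prompt to ask whether common sense says the outcomes really should be identical.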

M – Missing

If we have gone through all the previous steps and tabulated all of our glossary definitions, features, scenarios and corresponding outcomes, we perform a simple set of checks as follows:

  • Are all terms – in particular nouns and verbs – defined in the glossary?
  • Are there any features missing from our list that should be described in the requirements? For example, we have create, read and update features, but no delete feature.
  • Are there scenarios missing? We have some but not all combinations of conditions identified in our table.
  • Do we need more scenarios to adequately cover the functionality of a feature?
  • Are outcomes for all of our scenarios present and correct?
  • Are there any outcomes that are not on our list that we think should be?
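
Several of these checks reduce to simple set differences once the tabulation exists. A hypothetical sketch in Python – the feature and term inventories are made up for the example:

```python
# Made-up inventories built while working through the earlier steps.
expected_features = {"create", "read", "update", "delete"}
documented_features = {"create", "read", "update"}

glossary_terms = {"account", "customer", "order"}
terms_used_in_requirement = {"account", "customer", "order", "invoice"}

# Features we expected but the requirement never describes.
missing_features = expected_features - documented_features
# Nouns/verbs used in the requirement but never defined in the glossary.
undefined_terms = terms_used_in_requirement - glossary_terms

print("Missing features:", sorted(missing_features))
print("Undefined terms:", sorted(undefined_terms))
```

The point is not the code but the discipline: once definitions, features, scenarios and outcomes are tabulated, the Missing checks become routine comparisons rather than acts of inspiration.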

Workshops

Story-driven requirement validation, being based on the DeFOSPAM checklist, is easily managed in a workshop format. Requirements might be clustered or chunked into selected groups to be reviewed. It is helpful if all of the terms requiring definition, clarification or agreement are distributed ahead of the session, so the stakeholders responsible for the definitions can prepare before the meeting.

At the workshop or review meeting, each requirement is considered in turn and the discussion of the stories derived from it is performed:

  • Firstly, consider the definitions to be captured and agreed. The group needs to consider the definition of every term individually, but also in the context of other related terms as a self-consistent group.
  • For each feature, the scenarios are considered one by one. Where there are suspected omissions or ambiguities, these are discussed and corrected as necessary.
  • The scenarios for a feature are considered as a set: have enough scenarios been documented to understand the requirement? Do they provide enough spread? Do they cover the critical situations?
  • The requirement, stories and scenarios are considered as a whole: is the requirement clear, complete and correct? Are all features identified and all requirements addressed or described by one or more stories? Can the requirements and stories be trusted to proceed to development?

Usually, discrepancies in requirements, definitions and stories can be resolved and agreed quickly in the meeting.

The text of this post has been extracted from The Business Story Pocketbook written by Paul Gerrard and Susan Windsor.

Tags: #DeFOSPAM #businessstory #requirementsvalidation

Paul Gerrard Please connect and contact me using my linkedin profile. My Mastodon Account

First published 02/12/2022

I’m leaving Twitter, for obvious (to me) reasons, which I'll explain. Essentially, I don't like what's happening to it.

Twitter, YouTube, personal blogs and Mastodon servers are full of accounts and opinions of the calamitous start to Elon Musk's first weeks at the helm of what he hopes will become Twitter 2.0. I've listed some links at the bottom of this post to back up the tale of woe. I don't like Musk, I don't like the sound of his politics, I don't like what he's doing to 'turn the company around'. I'm guessing he'll fail sooner rather than later.

Twitter was bought with borrowed money and now has a debt of around $13bn, costing $1bn a year to service. Its revenues are plummeting as advertisers abandon the site. Half of the workforce of 7,500 has been fired (illegally), and whole departments have resigned, refusing to go 'hardcore' (including the HR payroll department, apparently). Of the 3,750 or so employees who were not fired, 75% did not respond to the 'click to be hardcore or be fired' email from Musk.

No one knows how many of the original 7,500 employees are still at the company – it could be just a few hundred who remain. It seems likely that many people remain on the books only because there are no HR staff left to terminate them. And so on.

What does this mean for Twitter?

The general view expressed by experts, Twitter-watchers and ex-employees is that when (not if) Twitter has some infrastructure failures, there may not be enough (or any) people with the skills required to restore the service. Twitter will always have had hackers trying to penetrate the site but, given the vulnerability of the service, they'll be trying extra hard to bring it down. Forever.

It's when they deploy larger software or infrastructure upgrades that the fun will really start.

Musk has also admitted that if Twitter can't get its revenues up, it may have to file for bankruptcy. It really is that bad.

So, I'm leaving (not leaving) Twitter

I won't close my account because, you never know, maybe it'll turn a corner or something and become both viable and an attractive, friendly place to be. But I'm not holding my breath.

Introducing Mastodon

Mastodon seems to be the main game in town for people wishing to change platforms. Compared to Twitter, it is still small – around 8 million accounts according to https://bitcoinhackers.org/@mastodonusercount – but growing fast, as most people leaving the 'bird site' need a new home and land on the federated, decentralised service named after the ancient ancestor of elephants.

Lots of blogs and YouTube videos explaining what Mastodon is and how you use it have appeared over the last few weeks. You must choose and register on a specific server (often called an instance), but that server is one of (today) about 3,500. At the time of writing, around 200 servers are being added per day. You can think of a Mastodon server a bit like an email server. You have to toot (or post) rather than Tweet through your home server, but it can connect with every other Mastodon user or server on the planet. (Unless they are blacklisted – and some are).

I have set up a Mastodon Server

It's not as easy to find people you know, and of course, it's early days and most people aren't on Mastodon yet. But it's growing steadily as people join, experiment and learn how to use the service.

Mastodon accounts are like Twitter accounts except that, like email, you have to specify the service you are registered with. For example, my Mastodon account is @paul@tester.social – a bit like an email address. Click on my address to see my profile.

I've been using Mastodon for about a month now, and I've found and followed the Mastodon accounts of 70-75 testing-involved people I follow on the bird site. That's nearly a quarter of the 325 I follow (and I follow quite a few non-testing, jokey and celebrity accounts). So I'll risk saying that 25% or more of tech-savvy people have made the move already. If you look at who the people you know follow, you will see names you recognise. It's not so hard to get started. And enthusiastic people who follow you on Twitter will be looking for you when they join.

Why did I set up a Mastodon server?

Good question. I like to try new products and technologies – I have been running my own hardware at data centres since the early 2000s. At one point I had a half rack and eleven servers. Nowadays, I have three larger servers hosting about twenty virtual machines. If you want to know more about my setup, drop me a line or comment below.

I've hosted mail and web servers, Drupal and Wordpress sites, and experimented with Python-based data science services, MQTT, lots of test tools and, for a while, I even ran a Bitcoin node to see what was involved. So I thought I'd have a play with Mastodon. I used this video to guide me through the installation process. It was a bit daunting, but with open-source software you have to invest time to figure out things that aren't explained in documentation or videos.

tester.social is hosted on a 4-CPU, 16 GB memory, 1 TB disk virtual machine and uses Cloudflare R2 as a cloud-based object store for media uploads and the like. From what I've seen of other sites' setups, it would easily support 10,000 or more registered users. But I'm going to monitor it closely to see how it copes, of course.

It's an experiment, but one we will support for the next year at least. If it takes off and we get lots of users, we may have to think carefully about hiring technical and moderation staff and/or limiting registrations. But that's a long way off for now.

If you want to join tester.social, please do but...

Be aware that when you register, you will be asked to explain in a few words what you want to see on the site, and what you'll be posting about. Your account will be reviewed and you'll get an email providing a confirmation and welcome message. This is purely to dissuade and filter out phoney accounts and bots.

We will commit to the Mastodon Server Covenant and hopefully be registered with the Mastodon server community which today numbers just over 3,000. Nearly 2,000 servers have been set up since Musk took over the bird site. Mastodon is growing rapidly.

I don't know anyone on Mastodon, how do I find people I know?

If you are on Twitter, keep an eye out for people you know announcing their move to Mastodon – follow them, and see who they follow. And follow them. And see who they follow that you know and follow them and...

If you are not on Twitter, follow me. See who I follow and follow those you know. You'll get the hang of it pretty quickly.

I don't want to leave Twitter yet, but want to experiment

Firstly, you can join any Mastodon service and use a cross-poster to copy your toots to tweets and vice versa. All new posts on either service will be mirrored to the other. I found this article useful to learn what help is out there to migrate from Twitter to Mastodon: https://www.ianbrown.tech/2022/11/03/the-great-twitter-migration/.

For example, there are tools to scan your Twitter follows and followers for Mastodon handles, and you can import these lists to get you started. I used Movetodon to follow all my Twitter follows who had Mastodon accounts. Over time, I'll use it again and again to catch people who move in the future.

I registered with https://moa.party/ – it synchronises my posts on tester.social and Twitter in both directions – so I only have to post once. I post on tester.social and the tweet that appears on bird site isn't marked as 'posted by a bot' – so that works for me.

I found Martin Fowler's Exploring Mastodon blog posts on the subject very useful. He talks about his first month of using the service. Which is where I am at the moment. Sort of.

A few FAQs answered. Sort of

What if I don't like the service I register with or prefer another service? You can always transfer your account to another Mastodon service. Your follows, followers, blocks and so on will be transferred, although your old posts stay on the original server. You have to create a new account on the target service and may have to change your account name if your current name is taken on the new service. A redirect from the old account to the new one is part of the service.

I hate advertising – can I avoid it? Mastodon servers do not display advertisements. Of course, companies might post commercials, but if you complain, the post might be taken down and the poster advised to leave the service – or, in extremis, blocked.

What are toots and boosts? A toot is equivalent to a Tweet, and a boost is the same as a Retweet.

Are there apps for Mastodon? Yes, of course – you can see the iOS and Android apps here. There are also a number of third-party apps. Don't ask me what they do – I haven't explored.

Why is the web interface better than apps? If you use the web interface, you can turn on what is called the Advanced Web Interface in your preferences. In this version, you can view and pin multiple columns at the same time – for example, your toot window, home timeline, local timeline and federated timeline in parallel. You can set up what appears in each column in your preferences.

What are toot window, home timeline, local timeline, federated timeline? The toot window is where you post new messages. The three timelines are:

  • Home: like on Twitter, it shows all the posts of all the people you follow, on all Instances.
  • Local: it shows all the posts of the members of your Instance.
  • Federated: it shows all the posts of the members of your Instance, plus the posts of people on other Instances who are followed by members of your Instance.

If you want more help – and I'm sure you do, try this site: https://mastodon.help/ – it seems to cover all of the basics and quite a lot of the advanced stuff too.

If you join tester.social and need help; have a question?

If you need help mention @questions in a post and it'll reach us. We'll try to answer as soon as we can.

References

https://www.wsj.com/articles/how-elon-musks-twitter-faces-mountain-of-debt-falling-revenue-and-surging-costs-11669042132

https://www.bloomberg.com/news/articles/2022-11-10/musk-tells-twitter-staff-social-network-s-bankruptcy-is-possible

https://www.cipp.org.uk/resources/news/twitter-s-payroll-department-walks-out.html

https://www.nytimes.com/2022/11/18/technology/elon-musk-twitter-workers-quit.html

https://www.theguardian.com/technology/2022/nov/17/elon-musk-twitter-closes-offices-loyalty-oath-resignations

https://social.network.europa.eu/@EU_Commission/109437986950114434 (Twitter does not meet EU laws)



Tags: #mastodon #tester.social #twexit #socialmedia #twitter


First published 23/12/2013

*** See our 'Test Strategy in a Day' workshop ***

Test Strategy is a Thought Process, Not a Document

Test strategy is one of those nebulous terms for an activity that everyone has to undertake for their development projects. The problem with test strategy is that there isn't a single agreed definition of what it is. Some people treat Test Strategy as a document: "A high-level description of the test levels to be performed and the testing within those levels for an organization or programme". So - a strategy could range from one to hundreds of pages of text. All we need is a document template!

Not quite.

Over the last twenty years, almost all of the Test Strategy documents Paul has reviewed have been "copy and edits" of documents from previous projects. All that changed were the names and the dates. And all of these documents were read by … no one.

We suggest that rather than being a document, test strategy is a thought process. The outcome of the thinking might be a short or a long document, but really, the strategy must address the needs of the participants inside the project as well as the customers of the product to be built. It needs to be appropriate to a short agile project or to a 1,000 man-year development. It has to have the buy-in of stakeholders but, most importantly, it must have value and be communicated. That's quite an ambitious goal, but we think it is achievable.

A Strategy Answers Questions

Before your organisation, your team or you as an individual test a system, you need some questions answered. These questions are pretty fundamental, and yet we have met many teams who could not answer them – even when they were in the thick of the testing. Here is a sample of the types of question to be answered:

  • Who are your testing stakeholders? What do they want from testing? Why do they want it? In what format? How often?
  • What is the scope of testing? What is out of scope? How will you manage scope?
  • How much testing is enough? What models should be used to guide the testing? How will you assess coverage?
  • and so on...

One way of looking at strategy is to regard it as a way of answering these questions. But not every question can be answered up-front. Some questions cannot be answered now, but perhaps later, when information comes to light. So the strategy must provide guidance for answering these questions and making decisions. It does this in three ways:

  1. Some decisions can be made now, directly and with confidence.
  2. Some decisions cannot be made now, but the strategy will provide a process, method or mechanism for reaching a decision.
  3. Some decisions cannot be made now, and no method could be formulated, but the strategy can provide some guiding principles to be followed in reaching a decision.

In this way, a strategy written early in a project, before all the information required is available, can flex and support a project that is heading into unknown territory. 

Is Agile Test Strategy an oxymoron? We don't think so. If you look at the three levels of decision making above, structured or waterfall projects aim to answer every question ahead of time. So, in these projects, most of the questions can be directly answered and the emphasis is on 'type 1' decisions. In agile or less certain projects, the lack of knowledge, or the need to remain flexible, demands that the emphasis shifts more towards 'type 2' or 'type 3' decisions.

The Trick is to Ask the Right Questions

The challenge in test strategy is not providing perfect answers to these questions. The secret to success is asking the right questions in the first place. Over twenty years, we have found ourselves asking the same questions again and again. The Test Axioms are an attempt to arrange these questions into a more usable structure. The Axioms represent context-neutral aspects of testing to think about. The 16 Axioms are organised into three groups: Stakeholder, Design and Delivery. Each Axiom has 5-10 questions associated with it; the Axioms therefore provide a checklist of questions to ask and a structure for your strategy document, should you choose to write one.

Our Test Strategy workshop

We have created a workshop titled 'Test Strategy in a Day' that you might find useful. In this workshop, we present a practical definition of a Test Strategy, provide a simple template for creating one and describe a systematic approach to thinking the right way. It's an interactive session where we invite you to bring your test strategy problems with you - we'll try and address them during the day.

  • Test strategy is a thought process, not a document
  • The goal is to acquire and communicate knowledge of achievement
  • How a valuable test strategy can be formulated and communicated


To view the workshop brochure click here.



Tags: #testaxioms #TestStrategy #AgileTestStrategy


First published 15/07/2013

See below the four presentations given by Paul at the World Conference on Next Generation Testing held in Bangalore, India between 8th and 12th July 2013.

Tags: #nextgentesting
