Paul Gerrard

My experiences and opinions in the Test Engineering business. I am republishing/rewriting old blogs from time to time.

First published 14 April 2015

Why Write this Blog?

Last week there was a flurry of tweets related to the perceived lack of female keynote speakers at a 'well-known software testing conference'. Various views were expressed:

  • No, there weren't enough women keynotes on the conference circuit.
  • Perhaps we should have all-women conferences?
  • Perhaps we should have a quota system for women keynotes at conferences?
  • The http://speaking-easy.com/ website aims to connect women who want coaching in conference speaking, helping them break into the apparently closed circuit of male conference and keynote speakers.
  • And so on.

It was suggested that perhaps a quota system, mandating a minimum number of women speakers on any programme, be applied. Personally, I'm reluctant to back that approach. It would undermine the achievements of women who get onto that same programme through hard work and on merit. I believe that for a conference to have credibility with its delegates and the speakers themselves, it has to be a level playing field.

So that's that then. Won't women always be in the minority when it comes to conference appearances? I don't think so. I see no reason why there can't be more women speakers and keynote speakers at conferences. I see no reason why some less-than-inspiring male speakers can't be displaced by more inspiring women speakers.

Is success in becoming an in-demand keynote speaker based on luck or favouritism, or is it one of those black arts where the secrets are protected by a small guild of people with funny handshakes? No, I have to tell you that the secret to success is mostly common sense – with some talent and hard work thrown into the mix.

I want to share what I know about being an in-demand keynote. It works for me. Perhaps it will work for you (whatever your gender).

Most Keynote Speakers are Uninspiring

By and large, the performances of keynote speakers – at least half in my opinion – are poor. 'Surely not!', you might say. 'Aren't these men (usually) the cream of the speaking fraternity? Are they not thought leaders, gurus and inspirational speakers?' Perhaps in the mind of some programme chairs they are, but not usually in mine. At the outset, I must say that I am difficult to please. Even so, I think there are definite reasons for less than stellar performances from a lot of keynotes.

Some keynotes are there simply because they work for 'a large company'. Joe Soap from Amafacemicrogooglyblahblah Corporation. They are there because they have an impressive-sounding title and work for a company who are making a move in a particular market (or are nowhere in a market and want to be). Perhaps they have something to say but, usually, it's nothing that can't already be found on their corporate website. Maybe their company is a sponsor. Maybe they work for a bank. Maybe the programme chair wants a job?

Then there are the self-promoters. Those who spend half an hour talking up the risks of not doing such and such a thing, then spend the rest of their slot espousing approaches, methods or tools that just happen to be part of their market offering. Tool vendors tend not to get keynote slots as they find it 'too hard' not to give sales pitches. But it seems that programme chairs aren't so prejudiced against service and training companies. Perhaps they should be. Trainers and coaches, being the most practised and best-positioned to promote themselves, are prominent on the conference keynote circuit.

Call me cynical. Maybe I am. But that's the last of the negativity.

Conference Realities

Every programme chair wants to have the very best talent on show in keynotes and their conference programme as a whole. It makes for an excellent conference and an enjoyable and informative experience for all of the delegates and speakers alike. But conferences are, in the main, businesses too. The organisers invest a lot of time and money through the year. Venues often need booking and demand substantial deposits, years in advance. So organisers tend to be quite risk-averse.

In the mind of the organisers, there is always a need to balance the need for the best, newest, most interesting speakers with the need to market and sell the conference to the world outside. People want to see at least some familiar faces on conference programmes and some 'big names' too. So there's a natural conservatism at work.

The exception to this conservatism is where a conference operates in a market or domain that is exploding. I went to the Mobile World Congress show in Barcelona last month – 2,500 exhibitors, 92,000 visitors and (I am guessing) EUR 50-100m income or more. But they go for the big names and don't take many chances. Compared with the multitudes in the exhibition, I guess there were only a thousand or so people in the conference part of the show. The conference isn't their priority, I suppose.

TED talks, whether the larger international shows or the smaller provincial affairs, are remarkably difficult to get into. You really do need to be a world-renowned speaker, politician, scientist, doer of good works, best-selling author or philanthropist. And yes, there are also some speaking opportunities at events that pay $100,000 for an appearance, but these are out of reach unless you are an ex-president, prime minister or major-winning golfer.

These are exceptional platforms for speakers. I want to concentrate on more mundane, more technical conference opportunities that pay modest fees to make it worthwhile for the speaker.

That's enough background – let's get down to business.

What you Need to be an In-Demand Keynote Speaker

I will share with you what I believe you need to do (and be) to be a keynote speaker in demand. Not the also-ran, averagely dull keynote. No. You should aim to be one of the keynotes that people in the circles you move in know, respect and want on their conference programmes because you are both a good speaker and a 'draw'. Here's how.

Motivation

The first and most important attribute you need to be a keynote speaker is the desire to be one. That's a simple statement, and the reality is a bit more complex than that, but your motivation is the key.

This motivation is not the same as wanting to make money or to promote your business or yourself. I'm afraid that if these are your motivations, most people will see through you. Not all will. You might have business partners, friends or acolytes who adore you, but these will still be a small minority, unless you frequent cliquey, isolated events or vendor shows.

No, the motivation has to be that you think you can actually do a better job than other people at speaking – or, I should say, communicating or even (get this!) performing. When you sit back and listen to someone else's keynote talk, you get angry enough to both think and say, “I can do better than that. And I'll tell you why”.

In your “Keynote Capability Self-Assessment”, these are the boxes you need to tick.

You Have to be Driven

The first attribute you need is perhaps a certain type of arrogance. You must sincerely think that you can stand up in front of a thousand or more people and tell them what to do. You must honestly believe that you know better than they do (for why else would you be lecturing them in the first place?). I have said arrogance, but that doesn't quite cut it. It's a combination of chutzpah, confidence, courage, belief and sheer bloody-mindedness. I can't quite put my finger on it, but you have to be willing and able to put yourself 'out there'.

When you're in the spotlight and someone in the faceless crowd challenges you with a piercing or even a stupid question, you have to want to reinforce your message, not just defend it. You must want to be, as in the children's game, the 'King of the Castle'.

There are few things more toe-curlingly (or deliciously) embarrassing than a keynote who is caught out. It's a bit like those situations when a member of the public asks a politician a simple, direct question and the hapless politician has no credible response. This keynoting game – it's not for everyone.

You Must Have Something to Say

Of course I'm sure you have. But it is worth emphasising that if you are simply trotting out clichés, mantras, received wisdom, or low-risk restatements of common sense or even other well-regarded speakers' ideas, you might get away with it in a track talk, but not in a keynote. Having said that, there is a place in all conferences for people who can communicate difficult concepts or can convince people to adopt what might appear to be simple things that are hard to achieve. But this puts you in the 'excellent communicator/motivator' bracket.

You must be a Good Communicator

Are you a good communicator? How do you know? Having a large vocabulary and using long words (when short ones will do) is not necessarily a good thing. Great communicators are not (usually) loudmouths, rabble-rousers or soapbox orators. Look up a definition of communication skills and use that for reference. Suffice to say, I have seen fantastic talks from people who are absolutely not natural speakers.

TED talks are often good examples where the speakers are not slick, but their material and their natural, no frills delivery gets their message over beautifully. It's a really hard one to judge oneself, so I recommend you ask advice of friends or people who have heard you speak in the recent past. But, do not think you can wing it, or think that your poor skills are the fault of your stupid audience.

Know Your Audience

Whether you have something interesting or useful to say is not your decision. It is a matter of other people's opinion. You might think you have pearls of wisdom to share, but your target audience might not. Fads, trends and revolutions come and go. Being on the bleeding edge of a new approach might suit your target audience. But it might also be too big a step for them to appreciate or, most embarrassingly, they might think your talk is old hat or just inappropriate. You have to know your market: what they know and don't know, what their problems and aspirations are, and what will go down well or like a lead balloon. Knowing your audience is at least as important as motivation.

So, these are the personal attributes of you as a speaker that you must pay attention to.

Getting Your First Keynote – Catch 22

But how do you get selected to speak at conferences? The best product on the planet – you, in this case – will never sell itself without some marketing. This is where most people fall down. Probably the most important aspect here is experience. Most programme chairs want experienced keynotes, not first-timers. First-timers pose a huge risk in the eyes of organisers and programme chairs. That's another stopper then.

How do you get that first keynote without previous experience? How do you get experience if you can't get your first keynote?

There are some simple (note, I don't mean easy) ways of getting your keynoting career off the ground.

Create Your Own Conference; Hire Yourself

First, how about creating your own conference and putting yourself on the bill? Now this is not as arrogant or as crazy as it seems. If you want to talk about a new topic that you think is important, perhaps there is no conference in existence that focuses on the themes you are interested in. (If there is, you are already playing catch-up, aren't you?)

Perhaps there is no regional conference in your part of the world. Perhaps there's a local group that needs livening up. Offer to help organise and suggest that having a keynote speaker might sell the event more. Volunteer to be the first.

Organise a Conference and Network

Whatever your circumstances, get yourself involved in a conference as an organiser, programme secretary or part of the selection or management team. Put this on your CV. Being part of a programme team means you'll have to find keynotes from time to time. Get in touch with other programme chairs, ask them for advice and for names of keynotes who have performed well for them. Make them aware that you are choosy and that you know a good keynote from a bad one. (You happen to be a keynote too.) Become part of the 'keynote employers' circle – make people aware you know their business.

Never Create a Track Talk Again

Only pitch keynote talks. From now on, you only ever write keynote talks. If you have to propose using an online form, call your talk a keynote regardless of whether you are pitching for a keynote or track. Make the reviewers believe that your talk is a cut above the average proposal. You never know, if the chair needs a keynote they might pick yours.

Pitch Multiple Talks

I can hear conference committees groan right now...

Give the programme chair or committee a choice. If they want you on the programme at all, when they see a gap, your excellent proposal might just slot in. But very often, conference programmes get shuffled around quite dramatically before they are settled. Think of it like a game of cards. The more cards you have in the deck, the more likely your card will come off the top. Of course, you need to have the content for two or three excellent talks. Your number 1 talk might be great but suppose it clashes with someone else's talk? Give the chair the opportunity to select your number 2 talk.

Most importantly, never take an existing abstract, change a few buzzwords and pitch that as a second or third offering. If you did have a good talk to start with, it will be obvious what you've done and all your proposals will be tarred with the same brush as hacks – and will be rejected. You must put the same effort into all of your proposals. They must have variety. They must all be 'top-drawer'.

How Many Talks Should I Have?

Some speakers get away with creating one new talk every few years and bang it to death. Maybe they concentrate on writing terrific talks. I try to create two or three talks on new topics every year. For each of these topics, if someone chooses it, I create a tutorial from it. I usually pitch topics as both keynotes and tutorials, of course. More cards in the game.

As some of your talks peak and become less popular, you need other leading-edge, novel or topical talks advancing to replace them. Maybe you have three talks:

  • Last year's talk – reliable – some people will take it again
  • This year's talk – evolving – popular and seems to be this year's favourite
  • Your new talk – experimental – maybe run once or twice as a track so far.

Pay Close Attention to Calls for Papers

Target the conferences you want to speak at and diarise the key dates for each. Look out for where the call for papers will be announced. If you know who next year's chair is, ask them what kind of keynotes they might look for – they might take note and get back to you later.

Before, or just as, the call is published, get in touch with the programme chair or someone on the programme committee. Ask them what would be a perfect pitch for a keynote that aligns with the theme, or what kind of 'off-piste' topic they'd consider for the programme. They'll probably trot out the conference theme, of course. But the key question to ask is, 'When will the keynotes be decided?' Make sure you send a well-written abstract for a killer keynote a week or two before this date. This might not be part of the standard call. But, having offered a talk, I would expect the chair to keep it – just in case.

Submit for Keynotes, Offer Them as Tracks

Submit for keynotes, knowing that you'll probably only get a track talk. Make sure you add the phrase 'this is a keynote talk' to the abstract. When a programme team is discussing what's in and what's out, it's unusual for them to discount a talk until they have to. Programme teams have a hard job and in some ways are quite indecisive because they are somewhat democratic. So your talk is quite likely to have at least one advocate in the programme team. But it has to be a good abstract, of course, to catch the eye of your advocate.

Offer to be a Reserve

It seems obvious, but medium-sized conferences that won't get hundreds or thousands of offers of talks sometimes struggle to get enough good proposals. Offer to be a reserve track presenter, tutorial giver or keynote. If there's a gap in any programme, especially in the shows that you help with – volunteer. Make it known that you are happy to step in and save the day in a crisis.

At conferences you attend, remind the organiser or chair before the opening talk that, 'I always carry a spare talk on a USB stick with me'. It will be noticed. If there's a disaster and your talk fits, you might just get the call.

Also, because programme chairs often network to learn of new keynotes, if another programme chair enquires and wants a keynote who can step up in a hurry – you might be remembered.

In-House Conferences

Quite often, people who hear you speak and appreciate your talk ask you to come and present at one of their internal conferences or get-togethers. Never say no, unless the hassle is too great or they ask you to speak on a subject you are not expert in. As often as not, internally they will call you a keynote speaker – it makes you (and the organiser) sound more important – as you are of course. If this is your first keynote – no problem – do the talk, get it on your CV. You're on the move.

Never Say No

This might sound like a controversial suggestion. What I mean is: never say no to an invitation to speak at what you regard as a 'good' conference, or one that won't cost you a lot in money or time. Always be open to going to new shows. The reason I say this is that, if you are just getting started, you should get as much experience as you can.

Some years ago, I was invited to speak at the 'Northern {insert obscure Midwest US state here} Software Testing Association'. I'd never heard of it, and had to use an atlas to find the location. But I met some very smart and lovely people, they treated me very kindly and I got several fantastic stories that I've been using for years and years – in keynotes.

Rewards

Ah yes – the fantastic fees you get as a keynote speaker. Or not. The fees cover your expenses and a bit more besides – if you're lucky. To be paid to pontificate to a group of friendly peers is also very flattering. But I have to say, unless you are an ex-president or a business or golfing icon, you can't make enough money to live on.

Actually, to create a good keynote talk takes a considerable amount of time, and you find that you have to re-use experience acquired over many years of being a practitioner, researcher and dreamer. The fee rate isn't quite so attractive when you take all that time into account. In fact – the rate sucks.

If you aspire to be an 'in demand keynote', you have to settle for two main rewards. The adrenalin and energy boost of being on stage in front of your peers is one. The other is the occasional person saying something very complimentary about your talk.

It sometimes ends in tears.

Paul Gerrard. Please connect and contact me via my LinkedIn profile or my Mastodon account.

https://youtu.be/a61wKNUbDhY?si=cdv1HJhGk7gGuNub

Video Abstract

Does your working software sometimes stop working when changes are introduced?

Are your developers unable to analyse the impact of changes, so that unwanted side effects get released?

What measures are you taking to reduce the number of software regression errors?

Now, even though it's expensive and doesn't prevent regressions, most companies use system-level regression testing.

You could say this is a last resort and, in some ways, the least effective anti-regression measure we can take.

Let's look more closely at what regressions are, why they occur, and which anti-regression measures are available to us.

Overview

I want to talk about software regressions and why regressions occur. If software regressions are the enemy, we want to prevent them as well as find them. Now, there are several options and we should consider all of them depending on our circumstances.

We need to know how regressions occur and why and take measures to prevent them as much as possible. So let's explore what a regression is.

What is a software regression?

One definition would be

“an unintended side effect or bug created or activated when changes are made to … something”

... and we’ll look at what that something is.

There are several causes of software regressions and there are some variations of these too.

Causes of Regressions

Obviously code changes are a big concern. The most common cause of regressions is developers modifying existing code. Code changes often unintentionally affect the behaviour of other parts of the system.

But there are also environment changes that can cause problems.

Environment Changes

For example, hardware, operating system and other software upgrades can cause previously stable software to fail.

Updates or changes to third-party libraries, APIs or services that your software relies on can introduce regressions.

These third parties could be partnering organizations or divisions in your own company.

Lack of Technical Understanding

When teams do not share adequate knowledge or understanding of the system's overall architecture, or if there is a lack of communication between different development teams, regressions are more likely to occur.

Older code bases usually lack clear documentation. All the experts may be long gone. Knowledge and understanding of the original design choices are poor, so architects, designers and developers make mistakes.

Code maintenance becomes very risky because no one has time to really analyse code to understand it. And without that understanding, it becomes difficult to predict the impacts of change.

Developer Testing

Development testing, if not thorough, can result in missed regressions.

Developers are adopting better testing practices, including test-first and TDD, but it's a slow process.

The big question is: how can we avoid software regressions? There are several well-established approaches.

More Effective Impact Analysis

The obvious one is to perform more effective impact analyses. But impact analysis is difficult and it's never going to be 100% reliable.

Requirements Impact Analysis

At the requirements level, we need to track requirements changes to understand the impact on other requirements.

Code Level Impact Analysis

At the code level, we have to trace code changes to understand the impact on other code.

Environment Impact Analysis

We should also evaluate the impact of environmental changes on our systems.

Now, all these measures sound great in principle. The problem is, they can be extremely difficult to apply in practice.

But there are other practices that help.

Anti-Regression Measures

Test-First Approaches

The first is test-first development.

Now, test-first implies that the whole team thinks about testing before both new development and changes, whether driven by requirements or bug reports.

Test-driven development (TDD) means developers write tests before writing code and, when done properly, means software changes incrementally in an always-tested state.

In continuous delivery environments, TDD is easier to apply and very effective at reducing regressions in later test stages.
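
To make this concrete, here is a minimal, illustrative sketch of one TDD cycle in Python using pytest. The pricing module and apply_discount function are inventions for this example, not taken from any real project.

    # Step 1 (RED): write the test first - it fails because apply_discount()
    # does not exist yet. All names here are hypothetical.
    import pytest
    from pricing import apply_discount

    def test_discount_is_applied():
        # A 10% discount on 100.00 should give 90.00.
        assert apply_discount(100.00, percent=10) == pytest.approx(90.00)

    def test_negative_discount_is_rejected():
        with pytest.raises(ValueError):
            apply_discount(100.00, percent=-5)

    # Step 2 (GREEN): write just enough code in pricing.py to pass:
    #
    #     def apply_discount(price, percent):
    #         if percent < 0:
    #             raise ValueError("percent must be non-negative")
    #         return price * (1 - percent / 100)
    #
    # Step 3 (REFACTOR): tidy the code while the tests stay green.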

We should not forget that test-first includes testing requirements – and testing requirements is a powerful approach.

For example, if you write Gherkin stories, creating feature scenarios not only helps testing, it can help the whole team to recognise and understand impacts.

Tracing language and feature changes across requirements gives some insight into impact too.

The use and reuse of data across requirements can point you in the right direction to find other impacts.
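
To make the Gherkin point above concrete, here is a small sketch – the feature wording, module and class names are my own inventions – showing how a scenario's Given/When/Then steps can map onto an executable check in plain pytest.

    # Illustrative only: a Gherkin-style scenario mapped to a pytest check.
    #
    #   Scenario: Transfer within balance
    #     Given an account with a balance of 100
    #     When I transfer 30 to savings
    #     Then the account balance is 70
    #
    # The 'bank' module and Account class are hypothetical.
    from bank import Account

    def test_transfer_within_balance():
        account = Account(balance=100)             # Given
        account.transfer(amount=30, to="savings")  # When
        assert account.balance == 70               # Then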

CI/CD Disciplines

Continuous Integration/Continuous Deployment (CI/CD) pipelines allow automated tests to run every time new code is pushed to the code base.

Continuous approaches extend the test-first concept. Test first becomes test ALWAYS.

This is why continuous delivery helps identify issues early to keep the software in a deployable state.

Code Review

Regular code reviews can help catch potential problems and prevent regressions because code changes are critically examined from new perspectives.

Now, tools can scan code in isolation, but developers and architects – humans – can look at code in the context of interfacing components too.

In this way, interfaces and collaborating components are examined more closely to find inconsistencies and impacts.

Refactoring

Regular refactoring improves code readability, maintainability and developer understanding and this reduces regressions too.

Refactoring is an essential stage of test-driven development: the TDD mantra is RED, GREEN, REFACTOR in every cycle.

Refactoring should not be an afterthought, but it is too often neglected when time is tight.

Version Control Discipline

Good version control practices eliminate some types of regressions.

Developers are well used to version control tools such as Git. But version control in continuous and DevOps environments requires discipline from BOTH developers and software managers.

Good version control practices not only reduce regressions; version control tools are also an invaluable aid to tracing the troublesome code that causes a regression failure.

Feature Flags

Feature flags allow you to enable or disable specific features in code dynamically.

In test or production, new or changed features can be released to a specific environment or selected users.

If there are problems, the features can be withdrawn or turned off.

This doesn’t reduce regressions, but it can reduce the impact of regression failures.

So, with care, we can extend some testing into production.
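
As a simple sketch of the idea – the environment-variable lookup is an assumption for illustration; real systems often use a flag service or configuration store – a feature flag is essentially a guarded branch around new behaviour:

    # Minimal feature-flag sketch (names and flag source are invented).
    import os

    def flag_enabled(name: str) -> bool:
        # Read flags from the environment for simplicity.
        return os.environ.get(f"FLAG_{name.upper()}", "off") == "on"

    def legacy_total(cart):
        return sum(item["price"] for item in cart)

    def new_pricing_total(cart):
        # The new, riskier code path guarded by the flag.
        return sum(item["price"] for item in cart) * 0.95

    def checkout_total(cart):
        if flag_enabled("new_pricing"):
            return new_pricing_total(cart)
        return legacy_total(cart)  # stable fallback

    # If the new path regresses in production, set FLAG_NEW_PRICING=off
    # and the stable path takes over - no redeploy needed.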

Test Coverage Throughout the Life-cycle

Finally, comprehensive test coverage helps. If your entire system is covered by tests at all stages, and if these tests are repeatable, and repeated, this has to help in the battle against regressions.

But this is an unachievable ideal in many situations.

Continuous delivery and test approaches are probably the least painful way of making progress.

System and User Regression Testing

System, end-to-end and user tests are a different matter. The problem is, they are often too slow and too late to keep pace with continuous development cycles.

Summary

In summary, by adopting a mix of these measures, you could significantly reduce the chances of regressions.

They definitely help to ensure new updates and changes improve the software without breaking existing functionality.

But there are no prizes for guessing the elephant in the room.

Automated Regression Testing

What is the role of automated system and user regression testing?

Automated regression testing is a very effective way to catch regressions after they occur, but it is not a prevention measure, unless the tests are part of a TDD cycle.

System and user regression tests are, at best, a partial safety net and may help when other measures fail.
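
To illustrate what that safety net can look like, here is an invented sketch in Python: when a bug is fixed, a test pins the fix so the defect cannot silently return. The module, function and bug numbers are hypothetical.

    # Illustrative regression tests pinning previously fixed bugs.
    from duration_utils import parse_duration  # hypothetical module under test

    def test_zero_duration_regression():
        # Hypothetical bug #123: "0m" used to raise instead of returning 0.
        assert parse_duration("0m") == 0

    def test_mixed_units_regression():
        # Hypothetical bug #456: "1h30m" was parsed as 130 minutes, not 90.
        assert parse_duration("1h30m") == 90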

The problem is that impact analysis, in particular, is often difficult and can be expensive. So regression prevention is not considered economic – it’s not looked at closely enough or acted upon.

For many years, late regression testing has been the only approach used to prevent regressions reaching end-users.

As a consequence, the test automation tool market has grown dramatically.

The problem is these tools support a costly, less effective approach to regression prevention.

Late regression testing is a last resort, yet test execution tools remain the go-to solution to regression problems.

We'll talk more about automated regression testing in the next video.


14 Nov 2023

Introduction

A long time ago in a place far far away ... I got involved with an initiative to create a software testing standard eventually published as British Standard 7925.

You can download the Draft Standard(s) from here.

These are some recollections of the years 1993 to 1998. I make no promise to get any of these memories 100% correct. (I haven't consulted any of the other participants in the story; I am writing this essay on impulse). They are mostly impressions, and impressions can be wrong. But I was there and involved at least.

As the Standard explains, this initiative started in January 1989 with a Standards Working Party (SWP) formed from members of the Specialist Interest Group in Software Testing (SIGIST, now BCS SIGIST). I don't know the detail of what actually happened but, in July 1992, a draft document was produced, and after some use of it in a few places, no one seemed very happy with it.

After a period, some of the members of the SWP decided to have another attempt. There were invites to get others involved through the SIGIST network and newsletter, and I was one of the 'new intake' to the SWP.

A New Start

In January 1993, a reformed SWP re-started from scratch, but retained the existing document as a source – some of it might be reusable, but the scope, overall direction and structure of the document needed a re-think.

PA Consulting generously offered the SWP free use of one of their meeting rooms in a plush office in Victoria, London, and provided nice coffee and generous bowls of fresh fruit, I recall. The number of participants in the group varied over time, averaging 10-12, I guess. Our monthly meetings typically had 8-10 people involved.

My old friend Stuart Reid, of Cranfield University at the time, led the effort – he had experience of working with standards bodies before. Other members (whom I would name if I could remember them all) worked for organisations including the National Physical Laboratory, the Safety-Critical Systems Club, IPL and Praxis. I was a consultant at Systeme Evolutif, a UK testing services company which became Gerrard Consulting some years later. And so on.

I recall I was in a minority – I was one of a small number of consultants working in middle-of-the-road IT at the time, not safety-critical or high-integrity systems. We were sometimes at odds with the other SWP-ers, but we learnt a lot in the meantime. We got on pretty well.

Early Direction

The original goal of the SWP was to come up with consistent, useful definitions of the main test design techniques that had been described in books like Glenford J Myers' The Art of Software Testing (1st Edition) and Boris Beizer's Software Testing Techniques.

At the time, there were very few books on software testing, although there were academics who had published widely on (mostly) structural test techniques such as statement, branch/decision testing and more stringent code-based approaches, as well as functional techniques such as combinatorial-, logic-, state- and X-Machine-based test design.

The descriptions of these techniques were sometimes a little imprecise and occasionally in conflict. So our plan was to produce a set of consistent descriptions of them.

From the eventual standard:

“The most important attribute of this Standard is that it must be possible to say whether or not it has been followed in a particular case (i.e. it must be auditable). The Standard therefore also includes the concept of measuring testing which has been done for a component as well as the assessment of whether testing met defined targets.”

Now this might not have been at the forefront of our minds at the start. Surely, the goal of the standard is to improve the testing people do? At any rate, the people who knew about standards got their way.

It seemed obvious that we would have to define some kind of process to give a context to the activity of test design, so we decided to look only at component-level testing and leave integration, system and acceptance testing to other standards efforts. We would set the path for the other test phase standards and leave it there. (Yes, we really thought for a while we could standardise ALL testing!)

Some Progress, with Difficulty

So, we limited the scope of the standard to unit, component, program, module or class testing. Of course, we had to define what a component was first. Our first challenge. We came up with:

Component: “A minimal software item for which a separate specification is available”

and:

Component testing: “The testing of individual software components. After [IEEE]”

That was kind of easy enough.

After some debate, we settled on a structure for the standard. We would define a Component Test Process and introduce each functional and structural test technique in two dimensions: as a technique for test design and as a technique for measurement (i.e. coverage).

All we had to do now was write the content – simple, wasn't it?

Difficulties and the Glossary

I think at that point, all our work was still ahead of us. We made some progress on content but got into long and sometimes heated debates. The points of dispute were often minor, pedantic details. These were important but, in principle, should have been easy to resolve. But no. It seemed that, of the seven or eight people in the room, two or three were disgruntled most of the time. We took turns to compromise and be annoyed, I guess.

From my perspective, I thought that mostly we were arguing over differing interpretations of terms and concepts. We lacked an agreed set of definitions and that was a problem, for sure.

In a lull in proceedings, I made a proposal. I would take the existing content, import it into an MS Access database, write a little code, and scan it for all the one-, two-, three- and four-word phrases. I would pick out those phrases that looked like they needed a definition and present the list at the next meeting. It took me about a day to do this. There were about 10,000 phrases in our text. I picked out 200 or so to define and presented them, and the group seemed content to agree definitions as and when the conversation used a term without one. This seemed to defuse the situation and we made good progress without the arguing.
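
Today the same scan would take a few lines of Python rather than an Access database. The following is a rough reconstruction of the idea, not the original code; the input file name is invented.

    # Collect every one- to four-word phrase in a text and count occurrences;
    # the most frequent phrases become candidates for glossary definitions.
    import re
    from collections import Counter

    def candidate_phrases(text, max_words=4):
        words = re.findall(r"[a-z][a-z'-]*", text.lower())
        counts = Counter()
        for n in range(1, max_words + 1):
            for i in range(len(words) - n + 1):
                counts[" ".join(words[i:i + n])] += 1
        return counts

    with open("draft_standard.txt") as f:  # hypothetical input file
        shortlist = candidate_phrases(f.read()).most_common(200)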

This was probably my most significant contribution. The resulting Glossary became BS 7925-1 (small fanfare please).

https://drive.google.com/file/d/1dVufDi4ru5mU7jF_Fy-T4wkn7_s-o4G6/view?usp=drive_link

Standard First Page

Process, Damned Process

The process was not so hotly debated, but why damned? Some of the group thought it would be uncontroversial to define a process for Component Testing. But some of us (the middle-of-the-road brigade) feared that a process, no matter how gently defined and proposed, could provoke practitioners to reject the standard out of hand.

We adopted a minimalist approach – there is little detail in the process section except a sequential series of activities with multiple feedback loops. There is no mention of writing code or amending code to fix bugs. It just explains the sequence and possible cycles of the five test activities.

It caused upset all the same – you can't please all the people all the time, and so on. But see my views on it later.

Finishing the Drafts

I have to say, once the structure and scope of the Standard were defined, and the Glossary definition process was underway, I stepped back somewhat. I'm not a good completer-finisher. I reviewed content but didn't add a lot of material to the final draft. The group did well to crank out a lot of the detail and iron out the wrinkles.

Submission to the British Standards Institute

Stuart managed the transition from our SWP to the standards body. Knowing that standards can be expensive to purchase once published, we created a final draft and made it publicly available for free download. That final draft went to the BSI.

The SWP wrote the standard. The BSI appointed a committee (IST/15) to review and accept it.

Committees responsible

You can see the involved parties are the sort of august organisations who might implement a Component Test Standard.

Legacy

It's just over 25 years since the standard was published on 15 August, 1998.

In April 2001, I set up a website (http://testingstandards.co.uk) which hosted downloadable copies of the (SWP draft) standards. The site was managed by some colleagues from the SWP and others interested in testing standards at the time. Some progress was made (on some non-functional testing) but that work kind of fizzled out after a few years. I keep the website running for sentimental reasons, I suppose. It is neglected, out of date and most of the external links don't work, but the download links above do work and will continue to do so.

BS 7925-1 and -2 have been absorbed into and replaced by ISO 29119. The more recent 29119 standard caused quite a stir and there was organised resistance to it. I fully agree that standards should be freely available, but the response from people who are not interested in, or don't believe in, standards was rather extreme and often very unpleasant.

Let standards be, I say. If people who see value in them want to use them, let them do so. Free country and all that. I don't see value in large overarching standards myself, I must say, but I don't see them as the work of the devil.

Anyway.

BS 7925 – a Retrospective

BS 7925-1 – “Vocabulary” – provided a glossary of some 216 testing terms used in the Component Testing Standard.

Vocabulary

Skimming the glossary now, I think the majority of definitions are not so bad. Of course, they don't mention technology and some terms might benefit from a bit of refinement, but they are at least (and in contrast to other glossaries) consistent.

BS 7925-2 – “Component Testing” – provided a process and definitions of the test design techniques.

BS 7925-2 Component Testing

Component Test process

The final draft produced by the SWP, and the eventual standard, contained some guidelines for the process that place it squarely in the context of a waterfall or staged development approach. At the time, Extreme Programming (1999) and the Agile Manifesto (2001) lay in the future.

Does this mean the process was unusable in a modern context? Read the process definition a little more closely and I think it could fit a modern approach quite well.

Component Test Process

Forget the 'document this' and 'document that' statements in the description. A company-wide component test strategy is nonsense for an agile team with two developers. But shouldn't the devs agree some ground rules for testing before they commit to TDD or a Test-First approach? These and other statements of intent and convention could easily be captured in some comments in your test code.

Kent Beck (who wrote the seminal book on TDD) admits he didn't invent the test-first approach but discovered it in an ancient programming book in the late 1990s. Michael Bolton helpfully provides an example – perhaps the first(?) – description of Test-First as an approach here.

We designed the component test process to assume a test-first approach was in place and it is an iterative process too – the feedback loops allow for both. Test-First might mean writing the entirety of a test plan before writing the entirety of the component code. But it could just as easily refer to a single check exercising the next line of code to be added.

So I would argue that with very little compromise, it supports TDD and automated testing. Not that TDD is regarded as a 'complete' component test approach. Rather, that the TDD method could be used to help to achieve required levels of functional and structural coverage, using good tools (such as a unit test framework and code coverage analyser).

I also appreciate that for many developers it doesn't add anything to a process they might already know by heart.

Test Techniques

Regarding the technique definitions, they pretty much stand as a concise and coherent set of descriptions. Annex B provides some good worked examples of the techniques in use. Of course, there are deeper, academic treatments of some, perhaps all, of these techniques. For example, Kaner, Padmanabhan and Hoffman wrote the 470-page “Domain Testing Workbook” – a generalisation of the equivalence partitioning and boundary value techniques – in 2013.

If you want a description of techniques that presents them in a way that emphasises their similarity – as examples of the concept of model-based test design and measurement – you will struggle to find a better summary.

In Conclusion

This blog has been a trip down a (fallible) memory lane describing events that occurred from 34 to 25 years ago.

I am not suggesting the standard be revived, reinstated, brought up to date, or ever used. But I wanted to emphasise two things in this essay:

The BS 7925 Standards effort was substantial – overall, it took nine years from idea to completion. The second, more productive phase took about four years. I am guessing the effort was between 300 and 500 man-days for the SWP alone. People gave their time in good faith and for no reward.

The challenges of programming and testing have not changed that much. The thinking, debate, ideals aspired to and compromises made in the late 1990s to create a component test standard would be much the same today.

Even so, do not think that creating a workable Component Test Standard for modern environments would be a simple task.


Alan Julien sent me a LinkedIn message asking me to consider his LinkedIn post and the many comments that resulted.

Being the lazy person I am, and seeing there were more than two hundred comments, I copied the post and as many comments as I could get onscreen, gave it all to ChatGPT and asked for a summary. Here is that summary.


Summary of the Debate on Software Testing Terminology: Manual vs. Automated Testing

The discussion, initiated by Alan Julien, critiques the terms manual and automated testing as misrepresentative of the intellectual and strategic nature of software testing. Many industry professionals support the idea that these labels oversimplify the discipline and propose alternative terminology to better reflect the depth of software testing.

Key Themes in the Debate:

1. The Misconception of “Manual Testing”

  • Many argue that manual testing implies repetitive, low-skill work when, in reality, it involves critical thinking, analysis, investigation, and risk assessment.
  • Testers engage in exploratory testing, problem-solving, and strategic planning, making the term “manual” misleading.
  • Several professionals note that testing has never been purely “manual”—tools have always assisted testing efforts.

2. “Automated Testing” is Not Fully Automated

  • The term automated testing suggests that testing can run independently of human intervention, which is not accurate.
  • Automation requires human creativity, scripting, maintenance, and analysis to be effective.
  • Many argue that “automated testing” should more accurately be called automated test execution since testing itself is a cognitive task.

3. Historical Origins of the Terms

  • Some trace the distinction to early test automation tool vendors (such as Mercury and Segue) who promoted their products by contrasting automation with “manual” testing.
  • The terminology was commercially driven and stuck due to marketing, HR practices, and job descriptions.

4. The Perceived Value and Pay Disparity

  • Automation roles are often paid higher than those labeled as “manual testers,” reinforcing the misconception that automation is superior.
  • This devalues the role of exploratory, risk-based, and analytical testing, which is essential for identifying critical defects.

5. Alternative Terminology Proposed

  • Many contributors suggest renaming these categories to emphasize cognitive and strategic aspects rather than execution methods:
    • “Exploratory Testing” vs. “Scripted Testing”
    • “Human-led Testing” vs. “Tool-assisted Testing”
    • “Feature Testing” vs. “Regression Testing”
    • “Cognitive Testing” vs. “Autonomous Testing”
    • “Augmented Testing” (human + AI collaboration)
    • “Experiential Testing” vs. “Programmatic Testing”
  • Some propose removing the labels entirely and just calling it testing, allowing for a focus on methodology rather than execution style.

6. Impact of AI on the Terminology Debate

  • AI-driven testing blurs the line further, making distinctions between “manual” and “automated” even less relevant.
  • Some advocate for terms like AI-assisted testing, rather than maintaining old binaries.

Conclusion: The Need for Change

  • The consensus is that the terms manual and automated testing are outdated, misleading, and limit the profession.
  • Industry professionals advocate for a shift in language to better reflect the intellectual and strategic contributions of testers.
  • The challenge remains in changing ingrained industry terminology, but many believe it is a battle worth fighting.

This debate highlights a broader effort to ensure software testing is recognized as a skilled, intellectual profession rather than just an execution-based task.


I agree with the summary above, and that the terms 'manual' v 'automated' aren't helpful.

For some years – maybe fifteen – I have advocated we look at testing from two standpoints:

  • How we think about testing
  • How we choose to manage the logistics of testing

You could say thinking is the strategy and logistics are tactics, but I favour a purist attitude: that IF (and only if) we separate the thinking – how we strategise, prioritise, design, select and hypothesise our tests, and review the outcomes of tests after the fact – from the practical aspects, we can deal with the logistics in a more sensible way.

To me, a test is designed by a human. (Maybe the human uses a tool – Word, Excel, code (e.g. Python) – to help.) Now this test could be an idea, or a 50-page document describing an immensely complex procedure to implement an end-to-end test in a very complex environment. I don’t care. The thought process (not the thoughts themselves) is the same – the content and scale can be very different, obviously.

Whether we execute a test with tools or ‘manually’ or by some other chosen piece of magic is irrelevant, if we consider the thought process as universal.

How we implement/execute a test – preparing environments, obtaining/configuring test data, running tests, validating results, analysing the outputs or results data, cleaning up environments and so on – these are logistical choices we can make. We might do some of these tasks without tools or with tools, with tools performing some or all of the logistics.

Some tasks can only be done 'manually' – that is, using our brains, pencil and paper, a whiteboard, or other aids to capture our ideas, test cases even. Or we might keep all that information in our heads. Other tasks can only be performed with tools. Every environment, application, stakeholder, goal and risk profile is different, so we need to make choices on how we actually ‘make the tests happen’.

Some tests, for example API tests, might be executed using a browser (typing URLs into the address bar), or code, or a dedicated tool. All these approaches require technology. But the browser is simply the UI we use to access the APIs. Are these tests manual or automated? It's a spectrum. And our actual approach is our choice. The manual v automated label blurs the situation. But it's logistics that are the issue, not testing as a whole.
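
For example – an invented sketch using the Python requests library against a placeholder URL – the same check a tester might make by typing a URL into a browser can be run from code:

    # Illustrative API check in code; the URL and fields are placeholders.
    import requests

    def test_get_account():
        response = requests.get("https://api.example.com/accounts/42", timeout=10)
        assert response.status_code == 200
        body = response.json()
        assert body["id"] == 42
        assert body["currency"] == "GBP"

The test design – what to request and what to check – is the same human thought process either way; whether a browser, this script or a dedicated tool performs it is logistics.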

So. I believe there is – at some level of abstraction, and with wide variation – a perspicacious thought process behind all tests. The choices we make for logistics vary widely depending on our context. You might call this a ‘context-driven approach’. (I wouldn’t, as I believe all testing is context-driven.) You might not ‘like’ some contexts or the approaches often chosen in those contexts. I don’t care – exploratory v scripted testing is partly a subjective call, and partly a contractual and/or cultural one (if you include contractual obligations or organisational culture in your context – which, obviously, I believe you should).

I use the model to explain why, for example, test execution is not all of testing (automated execution isn't the be-all and end-all). I use the model to explain the nuanced difference between improvised testing and pre-scripted testing. ('All testing is exploratory' is an assumption of the New Model.) I use the model to highlight the importance of 'sources of knowledge', 'challenging requirements', 'test modelling' and other aspects of testing that are hard to explain in other ways.

Like all models, the New Model is wrong, but I believe, useful.

If you want to know more about the challenges of testing terminology and the New Model thought process – take a look at my videos here:

Intro to the Testing Glossary Project

The New Model for Testing explains thinking and logistics


First published 20/09/2017

Revolution

If you are not working on a “Digital” project, the hype that surrounds the whole concept of Digital, and that is bombarding business and IT professionals, appears off-putting to say the least. But it would be wrong to ignore it. The Digital Transformation programmes that many organisations are embarking on are affecting business across all industry and government sectors. There is no doubt that Digital also affects people in their daily lives.

That sounds like yet another hype-fuelled statement intended to grab attention. It is attention-grabbing, but it’s also true. The scope of Digital[1] is growing to encompass the entirety of IT-related disciplines and the business that depends on them: that is – all business.

It is becoming clear that the scope and scale of Digital will include all the traditional IT of the past, but when fully realised it will include the following too:

  • The IoT – every device of interest or value in the world will become connected; sensors of all types and purposes will be connected – by the billion – to the internet.
  • Autonomous vehicles – cars, planes, ships, drones and buses – will become commonplace in the next ten years or so. Each will be a “place on the move”, fully connected and communicating with its environment.
  • Our home, workplace, public and private spaces will be connected. Our mobile, portable or wearable devices will interact with their environment and each other – without human intervention.
  • Robots will take over more and more physical tasks and make some careers obsolete and humans redundant. Robots will clean the city, fight our wars and care for the elderly.
  • Software in the form of ‘bots’ will be our guardian angel and a constant irritant – notifying us of the latest offers and opportunities as we traverse our Smart Cities[2].
  • The systems we use will be increasingly intelligent, but AI won’t be limited to corporates. Voice control may well be the preferred user-interface on many devices in the home and our car.
  • The operations or ‘Digital Storm’ of commerce, government, medicine, the law and warfare will be transformed in the next few years. The lives of mid-21st century citizens could be very different from ours.

Motivation

Still not convinced that Digital will change the world we live in? The suggested scale of change is overwhelming. Why is this happening? Is it hype or is it truly the way the world is going?

The changes that are taking place really are significant because it appears that this decade – the 2010s – is the point at which several technological and social milestones are being reached. This decade is witness to some tremendous human and technological achievements.

  1. One third of the world is connected; there are plans to connect the remaining two-thirds[3]
  2. The range of small devices that can be assembled into useful things has exploded. Their costs are plummeting.
  3. Local and low power networking technologies can connect these devices.
  4. Artificial Intelligence, which has promised so much for so many years, is finally delivering in the form of Machine Learning.
  5. Virtual and Augmented Reality-based systems are coming. Sony VR launched (13/10/2016) to over 1.8 million people and Samsung VR starts at under $100.
  6. Robotics, drone technology and 3D printing are now viable and workable whilst falling in cost.

Almost all businesses have committed to transform themselves using these technological advances – at speed – and they are calling it Digital Transformation.

Ambition

If you talk to people working in leading/bleeding edge Digital projects, it is obvious that the ambition of these projects is unprecedented. The origin of these projects can be traced to some critical, but dateless assumptions being blown away. It’s easy to imagine some Digital expert convincing their client to do some blue-sky thinking for their latest and greatest project. “The rules of the game have changed,” they might advise:
  • There need be no human intervention in the interactions between your prospects and customers and your systems[4].
  • Your sales and marketing messages can be created, sent to customers, followed up and changed almost instantly.
  • You have the full range of data from the smallest locale to global in all media formats at your disposal.
  • Autonomous drones, trucks and cars can transport products, materials and people.
  • Physical products need not be ordered, held in stock and delivered at all – 3D printing might remove those constraints.
  • And so on.

Systems of Systems and Ecosystems

According to NASA, the Space Shuttle[5] – with 2.5 million parts and 230 miles of wire – is (or was) the most complex machine ever built by man. With about a billion parts, a Nimitz-class supercarrier[6] is somewhat more complex. Of course, it comprises many, many machines that together make up the super-complex system of systems that is the modern aircraft carrier.

A supercarrier has hundreds of thousands of interconnected systems and, with its crew of 5,000-6,000 people, could be compared to an average town afloat. Once at sea, the floating town is completely isolated except for its radio communications with base and other ships.

The supercarrier is comparable to what people are now calling Smart Cities. Wikipedia suggests this definition[7]:

“A smart city is an urban development vision to integrate multiple information and communication technology (ICT) and IoT solutions in a secure fashion to manage a city’s assets – the city’s assets include, but are not limited to, local departments' information systems, schools, libraries, transportation systems, hospitals, power plants, water supply networks, waste management, law enforcement, and other community services.”

The systems of a Smart City might not be as complex as those of an aircraft carrier, but in terms of scale, the number of nodes and endpoints within the system might be anything from a million to billions.

A smart city is not just bigger than an aircraft carrier – it also has the potential to be far more complex. The inhabitants and many of the systems move in the realm of the city and beyond. They move and interact with each other in unpredictable ways. On top of that, the inhabitants are not hand-picked like the military; crooks, spies and terrorists can usually come and go as they please.

Unlike a ship isolated at sea, the smart city is extremely vulnerable to attack from individuals and unfriendly governments, and is comparatively unprepared for attack.

But it’s even more complicated than that.

Nowadays, every individual carries their own mobile system – a phone at least – with them. Every car, bus and truck might be connected. Some will be driverless. Every trash can, streetlight, office building, power point and network access point is a Machine-to-Machine (M2M) component of a Digital Ecosystem, which has been defined thus:

“A Digital Ecosystem is a distributed, adaptive, open socio-technical system with properties of self-organisation, scalability and sustainability inspired from natural ecosystems”[8].

Systems of Every Scale

The picture I’ve been painting has probably given you the impression that the Digital systems now being architected and built are all of terrifying scale. But my real point is this: the scale of Digital ranges from the trivial to the largest systems mankind has ever attempted to build.

The simplest system might be, for example, a home automation product – where you can control the heating, lighting, TV and other devices using a console, your mobile phone or office PC. The number of components or nodes might be ten to thirty. A medium complexity system might be a factory automation, monitoring and management system where the number of components could be several thousand. The number of nodes in a Smart City will run into the millions.

The range of systems we now deal with spans a few dozen to millions of nodes. In the past, a super-complex system might have had hundreds of interconnected servers. Today, systems are connected using services or microservices – provided by servers. In the future, every node on a network – even a simple sensor – will be a server of some kind, and there could be millions of them.
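To make that last point concrete, here is a minimal sketch of a 'sensor as a server' – entirely illustrative, with an invented device ID and reading format – using only the Python standard library:

    # Illustrative sketch: a temperature sensor that is itself a tiny server.
    # A real device would more likely speak MQTT or CoAP than HTTP.
    import json
    import random
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class SensorHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Simulate a reading; real hardware would be queried here.
            reading = {"sensor": "temp-0001",
                       "celsius": round(random.uniform(18.0, 24.0), 2)}
            body = json.dumps(reading).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # Every node - even this trivial sensor - is a server of a kind.
        HTTPServer(("0.0.0.0", 8000), SensorHandler).serve_forever()

Multiply that by millions of nodes and the scale of the Smart City testing problem starts to take shape.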

Systems with Social Impact

It might seem obvious to you now, but there is no avoiding the fact that Digital systems almost certainly have a social impact on a few, many or all citizens who encounter them. There are potentially huge consequences for us all as systems become more integrated with each other and with the fabric of society.

The scary notion of Big Brother[9] is set to become a reality: systems that monitor our every move – our buying, browsing and social activities – already exist. Deep Learning or Machine Learning algorithms generate suggestions of what to buy, where to shop, who to meet and when to pay bills. They are designed to push notifications to us minute by minute.

Law enforcement will be a key user of CCTV and of data on traffic, the movement of people and assets, and our behaviours. Their goal might be to prevent crime by identifying suspicious behaviour and directing law enforcement agents to places of high risk. But these systems have the potential to infringe our civil liberties too.

The legal frameworks of all nations embarking on Digital futures are some way behind the technology and the vision of a Digital Future that some governments are now forming.

In democratic states, civil liberties and the rule of law are very closely monitored and protected. In non-democratic or rogue states, there may be no limit to what might be done.

Ecosystems of Ecosystems

The span of Digital covers commerce, agriculture, health, government, the media in its various forms and the military; it will affect the care, travel, logistics, and manufacturing industries. There isn’t much that Digital won’t affect in one way or another.

A systems view does not do it justice – it seems more appropriate to consider Digital systems as ecosystems within ecosystems.

This text is derived from the first chapter of Paul's book, “Digital Assurance”. If you want a free copy of the book, you can request one here.

[1] From now on I’ll use the word Digital to represent Digital Transformation, Projects and the wide range of disciplines required in the ‘Digital World’.

[2] See for example, http://learn.hitachiconsulting.com/Engineering-the-New-Reality

[3] Internet.org is a Facebook-led organisation intending to bring the Internet to all humans on the planet.

[4] Referred to as ‘Autonomous Business Models’.

[5] http://spaceflight.nasa.gov/shuttle/upgrades/upgrades5.html

[6] http://science.howstuffworks.com/aircraft-carrier1.htm

[7] https://en.wikipedia.org/wiki/Smart_city

[8] https://en.wikipedia.org/wiki/Digital_ecosystem

[9] No, not the reality TV show. I mean the despotic leader of the totalitarian state, Oceania in George Orwell’s terrifying vision, “1984”.

Tags: #assurance #Digital #ALF #DigitalAssurance

Paul Gerrard Please connect and contact me using my linkedin profile. My Mastodon Account

First published 10/06/2014

At the Unicom NextGen Testing show on 26th June (http://www.next-generation-testing.com/), I'll be pitching some ideas about where the testing world is going – in around 15 minutes. I thought I'd lay the groundwork for that short talk with a blog that sets out what I've been working on for the past few months. These things might not all be new in the industry, but I think they will become increasingly important to the testing community.

There are four areas I've been working on in between travelling, conferences and teaching.

Testers and Programming

I've been promoting the idea of testers learning to write code (or at least becoming more technical) for some time. In February this year, I wrote an article for the Testing Planet: 'The Testers and Coding Debate: Can We Move on Now?' It suggested we 'move on' and that those testers who want to will find learning to code an advantage. It stirred up a lively debate, so it seems the debate is not yet over. No one is suggesting that learning how to write code should be compulsory, and no one is suggesting that testers become programmers.

My argument is this: for the investment of time and effort required, learning how to write some simple code in some language will give you a skill that you might use to write your own tools and, more importantly, the confidence and vocabulary to have more insightful discussions with developers. Oh, and by the way, it will probably make you a better tester, because you will have some insider knowledge of how programmers work (although some seem to disagree with that statement).

Anyway, I have taken the notion further and proposed a roadmap or framework for a programming training course for testers. Check this out: http://gerrardconsulting.com/?q=node/642
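To illustrate the 'write your own tools' point, here is the kind of small, throwaway script a tester might knock together – a hypothetical comparison of 'expected' and 'actual' CSV extracts (the file layout, keyed on the first column, is invented for the example):

    # A throwaway tester's tool: report differences between two CSV extracts.
    import csv
    import sys

    def load_rows(path):
        # Key each row by its first column (assumed to be a unique ID).
        with open(path, newline="") as f:
            return {row[0]: row for row in csv.reader(f) if row}

    def compare(expected_path, actual_path):
        expected = load_rows(expected_path)
        actual = load_rows(actual_path)
        for key in sorted(expected.keys() | actual.keys()):
            if key not in actual:
                print(f"MISSING from actual: {key}")
            elif key not in expected:
                print(f"UNEXPECTED in actual: {key}")
            elif expected[key] != actual[key]:
                print(f"DIFFERS for {key}: {expected[key]} != {actual[key]}")

    if __name__ == "__main__":
        compare(sys.argv[1], sys.argv[2])

Twenty-odd lines, no tool budget required – and a conversation starter with your developers. That is the kind of payoff I mean.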

Lean Python

Now, my intention all along in the testers and programming debate was to see if I could create a Python programming course that would be of value to testers. I've been a Python programmer for about five years and believe that it really is the best language I've used for development and for testing. So I discussed with my Tieturi friends in Finland the possibility of running such a course in Helsinki, and I eventually ran it in May.

In creating the materials, I initially thought I'd crank out a ton of PowerPoint and some sample Python code and walk the class through examples. But I changed tack almost immediately. I decided to write a Python programming primer in the Pocketbook format and extract content from the book to create the course. I'd be left with a course and a book (that I could give away) to support it. Then, almost at once, I realised two things:

  • First, it was obvious that to write such a short book, I'd have to ruthlessly de-scope much of the language and its standard functions, libraries and so on.
  • Second, it appeared that in all the Python programming I've done over the last five years, I only ever used a limited subset of the language anyway. Result!
And so, I wrote only about the features of the language that I had direct experience of.

I have called this Lean Python and the link to the book website is here: http://www.apress.com/gb/book/9781484223840#otherversion=9781484223857

“Big Data” and Testing the Internet of Everything

Last year, I became very interested in the notion of Big Data and the impact it might have on testing and testers. I put together a webinar titled Big Data: What is it and why all the fuss?, which you might find interesting. In the webinar, I mentioned something called Test Analytics. I got quite a few requests to explain more about this idea, so I wrote a longer article for Professional Tester magazine. You can go to the PT website or download the article, “Thinking Big: Introducing Test Analytics”, directly from here.

Now, it quickly occurred to me that I really did not know where all this Big Data was coming from. There were hints here and there, but it subsequently became apparent that the real tidal wave that is coming is the Internet of Things (also modestly known as the Internet of Everything).

So I started looking into IoT and IoE and how we might possibly test them. I have just completed the second article in a series on Testing the Internet of Everything for Tea Time with Testers magazine. In parallel with each article, I'm presenting a webinar to tell the story behind it.

In the articles, I'm exploring what the IoT and IoE are and what we need to start thinking about. I approach this from the point of view of a society that embraces the technology, then look more closely at the risks we face and, finally, at how we as the IT community in general and the testing community in particular should respond. I'm hopeful that I'll get some kind of IoE Test Strategy framework out of the exercise.

The first article in the series appears in the March-April edition of the magazine (downloadable here) and is titled, “The Internet of Everything – What is it and how will it affect you”.

There is a video of an accompanying webinar here: The Internet of Everything – What is it and how will it affect you.

A New Model of Testing

Over the past four years, since the 'testing is dead' meme, I've been saying that we need to rethink and re-distribute testing. Talks such as “Will the Test Leaders Stand Up?” are a call to arms. How to Eliminate Manual Feature Checking describes how we can perhaps eliminate, through Redistributed testing, the repetitive, boring and less effective manual feature checking.

It seems the software development business is changing. It is 'Shifting Left', but this change is not being led by testers. The DevOps, Continuous Delivery and Behaviour-Driven Development advocates are winning their battles, and testers may be left out in the cold.

Because the shift-left movement is gathering momentum, and Big Data and the Internet of Everything are on the way, I now believe that we need a New Model of Testing. I'm working on this right now. I have presented drafts of the model to audiences in the UK, Finland, Poland and Romania, and the feedback has been really positive.

You can see a rather lengthy introduction to the idea on the EuroSTAR website here. The article is titled: The Pleasure of Exploring, Developing and Testing. I hope you find it interesting and useful. I will publish a blog with the New Model for Testing soon. Watch this space.

That's what's new in testing for me. What's new for you?

Tags: #Python #IOE #IOT #InternetofEverything #programmingfortesters #InternetofThings #NewModelTesting

Paul Gerrard Please connect and contact me using my linkedin profile. My Mastodon Account

A question from Amanda in Louisville, Kentucky, USA.

“What's the acceptable involvement of a QA analyst in the requirements process? Is it acceptable to communicate with users or should the QA analyst work exclusively with the business team when interpreting requirements and filling gaps?

As testers, we sometimes must make dreaded assumptions and it often helps to have an awareness of the users' experiences and expectations.”

“Interesting question, Amanda. Firstly, I want to park the ‘acceptable’ part of your question. I’ll come back to it, I promise.

Let me suggest, firstly, that collaboration and consensus between users, BAs, developers and testers is helpful in almost all circumstances. You may have heard the phrase ‘three amigos’ in Agile circles to describe user/BA, developer and tester collaboration. What Agile has reminded us of most strongly is that regular and rapid feedback is what keeps momentum going in knowledge-based (i.e. software development) projects.

In collaborative teams, knowledge gets shared fast and ‘dreaded assumptions’ don’t turn into disasters. I can think of no circumstance where a tester should not be allowed to ask awkward questions relating to requirements, like ‘did you really mean this...?’, ‘what happens if...?’, ‘can you explain this anomaly?’ or ‘if I assume this..., am I correct?’. Mostly, these questions can be prefaced with another.

‘Can I ask a stupid question?’ reduces the chance of a defensive or negative response. You get the idea, I’m sure.

Where there is uncertainty, people make assumptions unless they are encouraged to ask questions and challenge other people’s thinking – to get to the bottom of problems. If you (as a tester) make assumptions, it’s likely that your developers will too (and different assumptions, for sure). Needless to say, the users, all along, may be assuming something entirely different. Assume makes an ass of u and me (heard that one before?).

So – collaboration is a very positive thing.

Now, you ask whether it is ‘acceptable’ for testers to talk directly to users. When might it not be ‘acceptable’? I can think of at least two situations. (There are probably more.)

One would be where you as a tester work for a system supplier and the users and BAs work for your customer. Because of commercial or contractual constraints, you might not be allowed to communicate directly. There is a risk (on both sides) that a private agreement between people who work for the supplier and the customer might undermine or conflict with an existing contract. The formal channels of communication must be followed. It is a less efficient way of working, but sometimes you just have to abide by commercial rules. Large, government or high-integrity projects often follow this pattern.

Another situation may be this: the BA perceives their role to be the interface between end users and the software project team. No one is allowed to talk directly to users, because private agreements can cause mayhem if only some parties are aware of them. The BA is accountable to users and the rest of the project team for changes to requirements. There may be good reasons for this, but if you all work for the same organisation, what doesn’t help is a ‘middle man’ who adds no value but distorts (unknowingly, accidentally or deliberately) the question from a tester and the response from a user.

Now, a good (IMHO) BA would see it as perfectly natural to allow testers (and other project participants) to ask questions of users directly, but it is also reasonable for them to be present, to assess consequences, to facilitate discussion, to capture changed requirements and disseminate them. That’s pretty much their job. A tester asking awkward questions is teasing out value and reducing uncertainty – a good thing. Who would argue with that?

But some BAs feel they ‘own’ the relationship with users. They get terribly precious about it, feel threatened and get defensive if other people intervene. In this case, the ‘not acceptable’ situation arises. I have to say, this situation reflects a rather dysfunctional relationship, not a good one. It isn’t helpful: it puts barriers in the way of collaboration, introduces noise and error into the flow of information, causes delays and creates uncertainty. Altogether a very bad thing!

Having said all that, with this rather long reply I’ve overrun some quota or other, I’m sure. The questions I would ask are: ‘unacceptable to whom?’ and ‘why?’ Are BAs defending a sensible arrangement or are they being a pain in the assumption?”



Tags: #FAQ #BusinessAnalysis

Paul Gerrard Please connect and contact me using my linkedin profile. My Mastodon Account

First published 01/06/2016

A recent study from the University of Oxford makes for interesting reading:

  • Over the next two decades, 47% of jobs in the US may be under threat.
  • 702 occupations are ranked in order of their probability of computerisation. Telemarketers are deemed most likely (99%); recreational therapists least likely (0.28%). Computer programmers appear to be 48% likely to be replaced.

If programmers have a roughly 50/50 chance of being replaced by robots, we should think seriously about how the same might happen to testers.

Machine Learning in testing is an intriguing prospect but not imminent. However, the next generation of testing tools will look a lot different from the ones we use today.

For the past thirty years or so we have placed emphasis on test automation and checking. In the New Model for Testing, I call this 'Applying'. We have paid much less attention to the other nine – yes, nine – test activities. As a consequence, we have simple robots to run tests, but nothing much to help us to create good tests for those robots to run.

In this paper, I am attempting to identify the capabilities of the tools we need in the future.

The tools we use in testing today are limited by the approaches and processes we employ. Traditional testing is document-centric and aims to reuse plans as records of tester activity. That approach, and many of our tools, are stuck in the past. Bureaucratic test management tools have been one automation pillar (or millstone). The other pillar – test automation tools – derives from an obsession with the mechanical, purely technical execution activity, and is bounded by an assertion that many vendors still promote: that testing is just bashing keys or touchscreens, which tools can do just as well.

The pressure to modernise our approaches, to speed up testing and to reduce cost and the dependency on less-skilled labour means we need some new ideas. I have suggested a refined approach using a Surveying metaphor. This metaphor enables us to think differently about how we use tools to support knowledge acquisition.

The Surveying metaphor requires new collaborative tools that can capture information as it is gathered, with little distraction or friction. They can also prompt the user to ask questions and to document their thoughts, concerns, observations and ideas for tests. In this vision, automated tools get a new role – supporting tester thinking, but not replacing it.

Your pair in the exploration and testing of systems might soon be a robot. Like a human partner, they will capture the knowledge you impart. Over time they will learn how to support and challenge you and help you to navigate through your exploration or Surveying activity. Eventually, your partner will suggest ideas that rival your own. But that is still some way off.
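To give a flavour of the idea – and this is my own toy illustration, not the tooling described in the paper – here is a sketch of a low-friction session note-taker that timestamps a tester's observations and offers a prompt on request:

    # Toy sketch of a Surveying-style note-taker: it captures timestamped
    # notes with minimal friction and offers a question when prompted.
    import datetime
    import itertools

    PROMPTS = itertools.cycle([
        "What surprised you in the last few minutes?",
        "What would you test next if you had one more hour?",
        "Any anomaly worth a bug report or a new test idea?",
    ])

    def survey_session(logfile="session-notes.txt"):
        print("Type notes as you explore. Blank line = prompt, 'q' = quit.")
        with open(logfile, "a") as log:
            while True:
                note = input("> ").strip()
                if note == "q":
                    break
                if not note:
                    print("Prompt:", next(PROMPTS))
                    continue
                stamp = datetime.datetime.now().isoformat(timespec="seconds")
                log.write(f"{stamp} {note}\n")

    if __name__ == "__main__":
        survey_session()

A real tool would capture screens, traffic and test ideas, and learn from them; the point of the sketch is the supportive, low-distraction role – the tool records and prompts, but the thinking remains the tester's.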

To download the full paper, go to the Tools Knowledge Base.



Tags: #testautomation #TestingTools #Robots #Bots

Paul Gerrard Please connect and contact me using my linkedin profile. My Mastodon Account

First published 09/05/2013

Testing is Long Overdue for a Change

Rumours of the death of testing were greatly exaggerated but, even so, the changes we predict will be dramatic. My own company has been heralding the demise of the 'plain old functional tester' (POFT) for years and we’ve predicted both good and bad outcomes of the technological and economic change that is going on right now. Some time ago, I posted a blog, Testing is in a Mess, where I suggested that there's complacency, self-delusion and overcapacity in the testing business; there is too little agreement about what testing is, what it’s for or how it should be done.

But there are also some significant forces at play in the IT industry, and I think the testing community will be coming under extreme pressure. I summarise this change as ‘redistributed testing’: users, analysts, developers and testers will redistribute responsibility for testing by – wait for it – collaborating more effectively. Testers probably won’t drive this transition, and they may be caught out if they ignore the winds of change.

In this article, I’ll suggest what we need from the leaders in our industry, the market and our organisations. Of course, some responsibility will fall on your shoulders. Whether you are a manager or technical specialist, there will be an opportunity for you to lead the change.

New Architectures, New Approaches

Much of the software development activity in the next five years or so will be driven by the need for system users and service vendors to move to new business models based on new architectures. One reason SaaS is attractive is that the route to market is so simple that tiny boutique software shops can compete on the same playing field as the huge independent software vendors.

SaaS works as an enabler for very rapid deployment of new functionality and deployment onto a range of devices. A bright idea in marketing in the morning can be deployed as new functionality in the afternoon and an increasing number of companies are succeeding with ‘continuous delivery’. This is the promise of SaaS.

Most organisations will have to come to terms with the new architectures and a more streamlined approach to development. The push and pull of these forces will make you rethink how software available through the Internet is created, delivered and managed. The impacts on testing are significant. If you take an optimistic view, testing and the role of testers can perhaps, at last, mature to what they should be.

The Testing Business has Matured, but Become Bloated

Over the last twenty years or so there has been a dramatic growth in the number of people who test and call themselves testers and test managers. It’s not that more testing happens; rather, the people who do it are now recruited into teams, with managers who plan, resource and control sizable budgets in software projects to run project test stages. There is no question that people are much more willing to call themselves testers. There are now a huge number of career testers across the globe; many have done nothing but testing in their professional lives. The problem is that there may now be too many of them.

In many ways, in promoting the testing discipline as some of us have done for more than twenty years, we have been too successful. There is now a sizable testing industry. We have certification schemes, but the schemes that were a step forward fifteen years ago haven’t advanced. As a consequence, there are many thousands of professional testers, certified only to foundation level, who have not developed their skills much beyond test script writing, execution and incident logging. Much of what these people do is basically ‘checking’, as Michael Bolton has called it.

Most checking could be automated and some could be avoided. In the meantime, we have seen (at last) developer testing begin to improve through their adoption of test-driven and behaviour-driven approaches. Of course, most of the testing they do is checking at a unit level. But this is similar to what many POFTs spend much of their time doing manually. Given that most companies are looking to save money, it’s easy to see why many organisations see an opportunity to reduce the number of POFTs if they get their developers to incorporate automated checking into their work through TDD and BDD approaches.

As the developers have adopted the disciplines and (mostly free) tools of TDD and BDD, the testers have not advanced as far. I would say that test innovation tends to be focused on the testers’ struggle to keep pace with new technologies, rather than on insights and inventions that move the testers’ discipline forward. Most testing is still manual, and the automated tests created by test teams (usually with expensive, proprietary tools) might be better done by developers anyway.

In the test management space, one can argue that test management is a non-discipline; that is, there is no such thing as test management, there’s just management. If you take the management away from test management, what’s left? Mostly challenges in test logistics – or just logistics – and that’s just another management discipline, isn’t it?

What about the fantastic advances in automation? Well, test execution robots are still, well, just robots. The advances in these have tracked the technologies used to build and deliver functionality – but that’s pretty much all. Today’s patterns of test automation are much the same as those used twenty or more years ago. Free test automation frameworks are becoming more commonly used, especially for unit testing. Free BDD tools have emerged in the last few years; these are still developer-focused, but expect them to mature in the next few years. Tools to perform end-to-end functional tests are still mostly proprietary, expensive and difficult to succeed with.

The test management tools that are out there are sophisticated, but they perform only the most basic record-keeping. Most people still use Excel and survive without test management products, which support only the clerical test activities and logistics and do little to support the intellectual effort of testers.

The test certification schemes have gone global. As Dorothy Graham says on her blog, the Foundation met its main objective of “removing the bottom layer of ignorance” about software testing. Fifteen years and 150,000+ certificate awards later, it does no more than that. For many people, it seems that this ‘bottom layer of knowledge’ is all they may ever need to get a job in the industry. The industry has been dumbed down.

Agile: a Stepping Stone to Continuous Delivery

There is an ongoing methodological shift from staged, structured projects to iterative and Agile approaches and now towards ‘continuous delivery’. Just as companies seem to be coming to terms with Agile, it’s all going to change again. They are now being invited to consider continuous ‘Specification by Example’ approaches. Specification by Example promotes a continual process of specification, exampling, test-first development and continuous integration. CI and delivery is the heartbeat, the test, life-support and early-warning system. The demands for better testing in development are being met. A growing number of developers have known no other way. If this trend continues, we will get better, stable software sooner, and much of the late functional checking done by system testers may not be required. Will this reduce the need for POFT testers? You bet.

But continuous delivery is a machine that consumes requirements. For the rapid output of continuous delivery to be acceptable, the quality of the requirements going into that machine must be very high. We argue that requirements must be trusted, but not perfect.

Testers are Being Squeezed

Developers are increasingly taking on the automated checking. Some business analysts are taking their chance, absorbing critical disciplines into analysis and taking over the acceptance process too. Combined, the forces above are squeezing testers out of the ‘low-value’, unskilled, downstream role. To survive, testers will have to up-skill to become upstream, business-savvy, workflow-oriented, UX-aware testing specialists with new tools, or specialise in automation or technical testing, or become business domain experts.

So how do Testers take Advantage of Redistribution?

I set out my top 10 predictions for the next five years in my blog On the Redistribution of Testing and I won’t labour those points here. Rather, I’ll explore some leadership issues that arise from the pressures I mentioned above and potential shifts in the software development and more particularly, testing business.

The core of the redistribution idea is that the checking that occupies much of the time of testing teams (who usually get involved late in projects) can be better done by developers. Relieving the testers of this burden gives them time to get involved earlier and to improve the definition of software before it is built. Our proposal is that testers apply their critical skills to the creation of examples that illustrate the behaviour of software in use in the requirements phase. Examples (we use the term business stories) provide feedback to stakeholders and business analysts to validate business rules defined in requirements. The outcome of this is what we call trusted requirements.

In the Business Story Pocketbook, we define a trusted requirement as “… one that, at this moment in time, is believed to accurately represent the users’ need and is sufficiently detailed to be developed and tested.” Trusted requirements are specified collaboratively with stakeholders, business analysts, developers and testers involved.

Developers, on receipt of validated requirements and business stories, can use the stories to drive their TDD approach. Some (if not all) of these automated checks form the bulk of the regression tests that are implemented in a Continuous Integration regime. These checks can then be trusted to signal a broken build. As software evolves, requirements change; stories and automated checks change too. This approach, sometimes called Specification by Example, depends on accurate specifications (enforced by test automation) for the lifetime of the software product. Later (and fewer) system testers can then focus their reduced time on the more subtle types of problem: end-to-end and user experience testing.
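As a sketch of that mechanism – the business rule, the function name and the figures are invented for illustration – a validated business story might become an automated check like this, runnable with pytest in a CI regime:

    # Story (agreed with stakeholders): "Given an order totals 100.00 or
    # more, when the order is priced, then a 10% discount is applied."
    import pytest

    def apply_discount(order_total):
        # The (invented) business rule under test.
        return order_total * 0.9 if order_total >= 100.00 else order_total

    def test_order_at_threshold_gets_discount():
        assert apply_discount(100.00) == pytest.approx(90.00)

    def test_order_below_threshold_pays_full_price():
        assert apply_discount(99.99) == pytest.approx(99.99)

If the rule changes, the story and the check change with it – which is precisely what keeps the specification trusted over the lifetime of the product.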

The deal is this: testers get involved earlier to create scenarios that validate requirements and that developers can automate. Improving the quality of requirements means the target is more stable and developers produce better code, protected by regression tests. Test teams, relieved of much of the checking and re-testing, are smaller and can concentrate on the more subtle aspects of testing.

With regard to the late testing in continuously delivering environments, testers are required to perform some form of ‘health check’ prior to deployment, but the days of teams spending weeks on this are diminishing fast. We need fewer, much smarter testers working up-front and in the short time between deployment and release.

Where are the Opportunities?

The software development and Agile thought leaders are very forcefully arguing for continuous delivery, collaborative specification, better development practices (TDD, BDD), continuous integration, and testing in production using A/B testing, dark releases, analytics and big data. The stampede towards mobile computing continues apace and, for organisations that have a web presence, the strategy is becoming clearer.

The pace of technical change is so high that the old way of testing just won’t cut it. Some teams are discovering they can deliver without testers at all. The challenge of testing is perceived (rightly or wrongly) to be one of speed and cost (even though it’s more subtle than that, of course). Testers aren’t being asked to address this challenge, because it seems more amenable to a technical solution, and POFTs are not technical.

But the opportunities are there: to get involved earlier in the requirements phase; to support developers in their testing and automation; to refocus testing away from manual checking towards exploratory testing; to report progress and achievement against business goals and risks, rather than test cases and bug reports.

Testers Need a New Mindset; so do Vendors

We need the testing thought-leaders to step up and describe how testing, if it truly is an information provision service, helps stakeholders and business analysts to create trusted requirements and supports developers in creating meaningful, automatable functional tests. And testers need to be there at the end to perform the testing (in production, or production-like environments) that ensures there are no subtle flaws in the delivered system.

Some of the clichés of testing need to be swept away. The old thinking is no longer relevant and may be career limiting. To change will take some courage, persistence and leadership.

Developers write code; testers test because developers can’t: this mentality has got to go. Testing can no longer be thought of as distinct from development. The vast majority of checking can be implemented and managed by development. One potential role for a tester is to create functional tests for developers to implement. The developers, being fluent in test automation, implement lower-level functional and structural tests using the same automation frameworks. Where developers need coaching in test design, testers should be prepared to provide it.

Testers don’t own testing: testing is part of everyone’s job from stakeholder, to users, to business analysts, developers and operations staff. The role of a tester could be that of ‘Testmaster’. A testmaster provides assurance that testing is done well through test strategy, coaching, mentoring and where appropriate, audit and review.

Testing doesn’t just apply to existing software, at the end: testing is an information provision service. Test activity and design are driven by a project’s need to measure achievement and to explore capabilities, strengths and weaknesses so that decisions can be made. The discipline of testing applies to all artefacts of a project: business plans, goals, risks, requirements and design. We coined the term ‘Project Intelligence’ some years ago to identify the information testers provide.

Testing is about measuring achievement, rather than quality: testing has much more to say to stakeholders when its output describes achievement against some meaningful goal, rather than alignment with a fallible, out-of-date, untrusted document. The Agile community has learnt that demonstrating value is much more powerful than reporting test pass/fails. They haven’t figured out how to do it, of course, but the pressure to align Agile projects with business goals and risks is very pronounced.

Whither the Test Manager?

You are a test manager or a test lead now. Where will you be in five years? In six months? It seems to me there are five broad choices for you to take (other than getting out of testing and IT altogether).
  1. Providing testing and assurance skills to business: moving up the food chain towards your stakeholders, your role could be to provide advice to business leaders wishing to take control of their IT projects. As an independent agent, you understand business concerns and communicate them to projects. You advise and cajole project leadership, review their performance and achievement and interpret outputs and advise your stakeholders.
  2. Managing requirements knowledge: In this role, you take control of the knowledge required to define and build systems. Your critical skills demand clarity and precision in requirements and in the examples that illustrate features in use. You help business and developers to decide when requirements can be trusted to the degree that software can reasonably be built and tested. You manage the requirements, together with a glossary and dictionary of usage for business concepts and data items. You provide a business impact analysis service.
  3. Testmaster – providing an assurance function to teams, projects and stakeholders: A similar role to 1 above, but for more Agile-oriented environments. You are a specialist test and assurance practitioner who keeps Agile projects honest. You work closely with on-site customers and product owners. You help projects to recognise and react to risk, coach and mentor the team and manage their testing activities – and maybe do some testing yourself.
  4. Managing the information flow to/from the CI process: in a Specification by Example environment, if requirements are validated with business stories and these stories are used directly to generate automated tests which are run on a CI environment, the information flows between analysts, developers, testers and the CI system is critical. You define and oversee the processes used to manage the information flow between these key groups and the CI system that provides the control mechanism for change, testing and delivery.
  5. Managing outsourced/offshore teams: In this case, you relinquish your onsite test team and manage the transfer of work to an outsourced or offshore supplier. You are expert in the management of information flow to/from your software and testing suppliers. You manage the relationship with the outsourced test team, monitor their performance and assure the outputs and analyses from them.

Summary

The recent history and current state of the testing business, the pressures that drive testers out of testing, and the pull of testing into development and analysis will force a dramatic redistribution of test activity in some, perhaps most, organisations.

Henry Kissinger said, “A leader does not deserve the name unless he is willing occasionally to stand alone”. You might have to stand alone for a while to get your view across. Dwight D Eisenhower gave this definition: “Leadership is the art of getting someone else to do something you want done because he wants to do it”.

Getting that someone else to want to do it might yet be your biggest challenge.



Tags: #futureoftesting #Leadership

Paul Gerrard Please connect and contact me using my linkedin profile. My Mastodon Account