Paul Gerrard

My experiences in the Test Engineering business; opinions, definitions and occasional polemics. Many have been rewritten to restore their original content.

First published 18/07/2011

Got your attention? I'm triggered into posting what follows because Rob Lambert has published an 'ebook', The Problems of Testing. It reminded me of a blog I wrote 18 months or so ago – but never published. I never pressed the 'publish' button because I thought it was rather too negative. But Rob's frustration is shared, and here's my take on it. I don't agree with all he says, but I think there's a certain amount of complacency, self-delusion and overcapacity in our business. Here's what I wrote...

I was going to call this post “Do you engage your brain before you test?” but as I wrote it, I got more and more upset by the state of our discipline. (I can hardly bring myself to call it a discipline any more.) It seems to me that the major 'trends' in our discipline are being promoted and adopted at a terrifying rate. I'm not worried about the rate of change, or about being left behind. I am worried that the people who are being sold these approaches and implementing them are taking on so-called 'solutions' without understanding what the underlying problem is, what they are trying to achieve, or how to evaluate 'success'. People have no time and no frame of reference in which to think or to consider the pros and cons of available courses of action. Often, they don't even know who they are testing for. For example:

  • tools are used to automate tests that have unknown or limited value
  • 'crowds' might help us test but what is the objective, where is the control or accountability?
  • test design techniques are promoted as 'good practice' without people understanding the concept of test models, their value and limitations
  • coverage is discussed endlessly without any understanding of its meaning, subjectivity and interpretation
  • the language and terminology we use is riddled with duplication, inconsistency and ambiguity
  • outsourcing/offshoring are promoted as being effective and economic - without any discussion of their value
  • the exploratory/ad-hoc v planned/documented testing debate is a yah-boo-sucks schoolyard shouting match - how is an interested observer to understand the issues?
  • most debate is about software testing, but we test systems, don't we? Why don't we adopt a Systems Approach?
  • certification schemes are embedding many of these flawed ideas and are strangling competitive, viable and often better alternatives.

I could go on. If I suggested that 50% of the testers currently in our business shouldn't be – how would you argue against that? Why are YOU in this business? For example – can you string two words together? One of the reasons I wrote the TESTER'S POCKETBOOK was to drill down to the fundamentals that underpin all testing (if I could find them). It's been a struggle, but I've come up with some and they are useful. I call these fundamentals TEST AXIOMS and you can see them here. They make sense to me as a starting point for discussion on testing and test practices, but they also underpin much of the thinking about test strategy and improvement. Ask them of yourself or your organisation or the next person you interview. They work for me. Am I too pessimistic about the state of our industry? Is anything going well out there? If/as/when the economies of our various countries squeeze testers out of businesses – will you still be in a job? How will you justify your role? Developers, analysts and users combined can do most of what you do. Can you do their job? Who is indispensable now? Think about it.



Tags: #testing #testaxioms


First published 06/11/2009

Getting the requirements ‘right’ for a system is a prerequisite for successful software development, but getting requirements right is also one of the most difficult things to achieve. There are many difficulties to overcome in articulating, documenting and validating requirements for computer systems. Inspections, walkthroughs and prototyping are the techniques most often used to test or refine requirements. However, in many circumstances, formal inspections are viewed as too expensive, walkthroughs as ineffective and prototyping as too haphazard and uncontrolled to be relied on.

Users may not have a clear idea of what they want, and are unable to express requirements in a rational, systematic way to analysts. Analysts may not have a good grasp of the business issues (which will strongly influence the final acceptance of the system) and tend to concentrate on issues relevant to the designers of the system instead. Users are asked to review and accept requirements documents as the basis for development and final acceptance, but they are often unable to relate the requirements to the system they actually envisage. As a consequence, it is usually a leap of faith for the users when they sign off a requirements document.

This paper presents a method for decomposing requirements into system behaviours which can be packaged for use in inspections, walkthroughs and requirements animations. Although not a formal method, it is suggested that by putting some formality into the packaging of requirements, the cost of formal inspections can be reduced, effective walkthroughs can be conducted and inexpensive animations of requirements can be developed.
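The packaging itself is described in the paper. Purely as a rough, hypothetical illustration of what decomposing a requirement into behaviours might look like (the structure and field names below are assumptions for the example, not the paper's notation), a single behaviour could be captured as a stimulus, the conditions under which it applies, and the expected response:

```python
# Hypothetical sketch only: one way a requirement might be decomposed into
# discrete behaviours for review in an inspection, walkthrough or animation.
# The structure and field names are assumptions, not the paper's notation.
from dataclasses import dataclass

@dataclass
class Behaviour:
    stimulus: str     # what the user or system does
    conditions: str   # the state in which the stimulus occurs
    response: str     # the observable outcome the requirement demands

# A requirement such as "customers may withdraw cash if funds are available"
# decomposes into separately reviewable (and animatable) behaviours:
behaviours = [
    Behaviour("request withdrawal", "balance >= amount", "cash dispensed, balance reduced"),
    Behaviour("request withdrawal", "balance < amount", "request refused, balance unchanged"),
]

for b in behaviours:
    print(f"GIVEN {b.conditions} WHEN {b.stimulus} THEN {b.response}")
```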

Registered users can download the paper from the link below. If you aren't registered, you can register here.

Tags: #testingrequirements


First published 07/12/2011

It's been interesting to watch, over the last 10 or maybe 15 years, the debate over whether exploratory or scripted testing is more effective. There's no doubt that one can explore more of a product in the time it takes for someone to follow a script. But then again, how much time do exploratory testers lose bumbling around, aimlessly going over the same ground many times, hitting dead ends (because they have little or no domain or product knowledge to start with)? Compare that with a tester who has lived with the product requirements as they have evolved over time. They may or may not be blinkered, but they are better informed – sort of.

I'm not going to decry the value of exploration or planned tests – both have great value. But I reckon people who think exploration is better than scripted under all circumstances have lost sight of a thing or two. And that phrase 'lost sight of a thing or two' is significant.

I'm reading Joseph T. Hallinan's book, “Why We Make Mistakes”. Very early on, in the first chapter no less, Hallinan suggests that “we're built to quit”. It makes sense. And so we are.

When humans are looking for something (smuggled explosives, tumours in x-rays, bugs in software), they are adept at spotting it if, and it's a big if, those things are common. In that case they are pretty effective, spotting what they look for most of the time.

But what if what they seek is relatively rare? Humans are predisposed to give up the search prematurely. It's evolution, stupid! Looking for, and not finding, food in one place just isn't sensible after a while. You need to move on.

Hallinan quotes (among others) the cases of people who look for PA-10 rifles in luggage at airports and tumours in x-rays. In these cases, people look for things that rarely exist. In the case of radiologists, mammograms reveal tumours only 0.3 percent of the time; 99.7 percent of the time the searcher will not find what they look for.

In the case of guns or explosives in luggage, the occurrence is rarer. In 2004, according to thegunsource.com, 650 million passengers travelled by air in the US, but only 598 firearms were found – about one in a million occurrences.

Occupations that seek to find things that are rare have considerable error rates. The miss rate for radiologists looking for cancers is around 30%. In one study at the world-famous Mayo Clinic, 90% of the tumours missed by radiologists were visible in previous x-rays.
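To make the arithmetic in those figures concrete, here is a quick back-of-the-envelope calculation using only the numbers quoted above (nothing new is added):

```python
# Back-of-the-envelope arithmetic using only the figures quoted above.

# Firearms in luggage, US, 2004 (per thegunsource.com).
passengers = 650_000_000
firearms_found = 598
print(f"About 1 find per {passengers / firearms_found:,.0f} passengers")  # roughly one in a million

# Mammography: tumours present in ~0.3% of scans; miss rate ~30%.
scans = 10_000
with_tumour = scans * 0.003      # ~30 scans actually contain a tumour
missed = with_tumour * 0.30      # ~9 of those are missed
print(f"Of {scans:,} scans, ~{with_tumour:.0f} show a tumour and ~{missed:.0f} of those are missed")
```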

In 2008, I travelled from the UK to the US, to Holland and to Ireland. On my third trip, returning from Ireland with the same rucksack on my back (i.e. my sixth flight), I was called to one side by a security officer at the Dublin airport security check. A lock-knife with a 4.5 inch blade had been found in my rucksack. Horrified, when presented with the article, I asked for it to be disposed of! It was mine, but it was in the bag by mistake – and it had been there for six months, unnoticed by me and by the scans at five previous airport terminals, which had all failed to detect a quite heavy metal object, pointed and a potentially dangerous weapon. How could that happen? Go figure.

Back to software. Anyone can find bugs in crappy software. It's like walking barefoot in a room full of loaded mousetraps. But if you are testing software of high quality, it's harder to find bugs. It may be that you give up before you have given yourself time to find the really (or not so) subtle ones.

Would a script help? I don't know. It might help because, in principle, you have to follow it. But it might make you even more bored. All testers get bored/hungry/lazy/tired and are more or less incompetent or uninformed – you might give up before you've given yourself time to find anything significant. Our methods, such as they are, don't help much with this problem. Exploratory testing can be just as draining/boring as scripted testing.

I want people to test well. It seems to me that the need to test well increases with the criticality and quality of software, and motivation to test aligns pretty closely. Is exploratory or scripted testing of very high quality software more effective? I'm not sure we'll ever know until someone does a proper experiment (and I don't mean testing a 2,000-line toy program, whether it sits in a website or a nuclear missile).

I do know that if you are testing high quality code (and just before release it usually is of high quality), then you have to have your eyes open and your brain switched on. Both of 'em.

Tags: #exploratorytesting #error #scriptedtesting


First published 25/09/2009

The Tester's Pocketbook £10 (free p&p)

From the Preface...

The first aim is to provide a brief-as-possible introduction to the discipline called testing. Have you just been told you are responsible for testing something? Perhaps it is the implementation of a computer system in your business. But you have never done this sort of thing before! Quite a lot of the information and training on testing is technical, bureaucratic, complicated or dated. This Pocketbook presents a set of Test Axioms that will help you determine what your mission should be and how to gain commitment and understanding from your management. The technical stuff might then make more sense to you.

The second aim of this Pocketbook is to provide a handy reference, an aide-memoire or prompter for testing practitioners. But it isn’t a pocket dictionary or summary of procedures, techniques or processes. When you are stuck for what to do next, or believe there’s something wrong in your own or someone else’s testing, or you want to understand their testing or improve it, this Pocketbook will prompt you to ask some germane questions of yourself, your team, your management, stakeholders or suppliers.

Visit the official Tester's Pocketbook website

The Test Axioms website...

...sets out all of the axioms with some background to how they got started. The axioms are a 'work in progress' and you can also comment on the axioms on the site.


Tags: #Pocketbook #onlineorder #booksales


First published 29/09/2009

Getting a straw man on the table is more important than getting it right

The Wisdom of Crowds – the book

Wideband Delphi – Wikipedia etc.

getting started

Tags: #ALF


First published 11/10/2009

Intranet-Based Testing Resource

Browse a demonstration version of the Knowledge Base

Paper-based process, guideline and training resources work effectively for practitioners who are learning new skills and finding their way around a comprehensive methodology. However, when the time comes to apply these skills in a project, paper-based material becomes cumbersome and difficult to use. Methodologies, guidelines and training materials may run to hundreds of pages of text. Testing templates are normally available on a LAN, so they are not integrated. Most practitioners end up copying the small number of diagrams required to understand the method and pinning them on their wall. Other resources are unevenly distributed across a LAN that no one has responsibility for maintaining.

The Internet (and Intranets) offer a seamless way to bring these diverse materials together in a useful resource, available to all practitioners. Gerrard Consulting offers to build test infrastructure on your Intranet or to host it on our own web site. The resource comprises a large volume of standard reference material, and the intention is to build a front-end to the product that supports project risk analysis and the generation of a comprehensive project test process, without the need for consultancy or specialist skills. The test process is built up from standard test types that reference standards, templates, methods, tools guides and training material as required.

The Tester Knowledge Base or TKB™ is a flexible but comprehensive resource for use by practitioners, assembled from your existing methods and guidelines and our standard techniques and tools guides, all fronted by a risk-based test process manager. The intention is for the TKB™ to be a permanently available and useful assistant to test managers and practitioners alike.

Intranet based test infrastructure

Gerrard Consulting is now offering test infrastructure on an Intranet. The core of the test infrastructure is the Test Process. The TEST PROCESS SUMMARY represents the hub around which all the other components revolve. These components are:

  • Test Strategy: Covers the identification of product risks and the accommodation of technical constraints into high-level test planning.
  • Testing Training: Generic ISEB-approved training material, supported by specialist material on internet, automation and management, is integrated and fully cross-referenced into the infrastructure.
  • Test Automation: Test stages and test types are linked to pages on your own tools and to the CAST report for browsing.
  • Standards: Internal or industry test standards, test templates and a glossary of terms are all included.
  • Test Improvement: The TOM™ assessment forms are available on-line for comparison with industry surveys.
  • Test Techniques: Technical test techniques for e-commerce environments, as well as test design and test measurement techniques to BS 7925, are included.
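Purely as an illustration of the idea that a project test process is assembled from standard test types which reference this kind of supporting material, here is a hypothetical sketch (the names and structure are invented for the example, not the TKB's actual implementation):

```python
# Hypothetical sketch: a test process assembled from standard test types,
# each referencing supporting material of the kinds listed above.
# Names and structure are invented for illustration, not the TKB itself.
standard_test_types = {
    "performance test": {
        "standards": ["performance acceptance criteria"],
        "templates": ["load test plan template"],
        "tool_guides": ["load generation tool guide"],
        "training": ["performance testing module"],
    },
    "usability test": {
        "standards": ["internal usability checklist"],
        "templates": ["usability test plan template"],
        "training": ["usability testing module"],
    },
}

# A risk analysis selects the test types relevant to the project; the
# project test process is then that subset with its referenced material.
selected_by_risk_analysis = ["performance test"]
project_test_process = {t: standard_test_types[t] for t in selected_by_risk_analysis}
print(project_test_process)
```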

Browse a demonstration version of the Knowledge Base

Use the Contact us form to pose your challenge. Perhaps we can help?

Tags: #tkb #testerknowledgebase


First published 21/09/2009

Test Assurance critiques your test approach for suitability and effectiveness. At the project initiation stage it’s a form of insurance, supporting the identification and application of the most appropriate testing approach. Test Assurance provides an objective view with direct feedback to stakeholders and is totally independent from the delivery of the project. When projects get into difficulty, Test Assurance rapidly identifies the issues relating to testing and provides practical and pragmatic actions to get the project back on track. We can provide this service directly or work with your organisation to set up an internal test assurance function. Both services deliver:



Tags: #testassurance


First published 24/09/2009

The interpretation of a test by a human being is influenced by the state of mind of the tester before that test is run.

Example: if we run a test and experience a failure:

  • If the failure is in the area where we are 'looking for bugs', e.g. a newly written piece of code, we are disappointed but not unnerved (say), because we expect to find bugs in new code – and that is the purpose of the test.
  • If the failure is in an area we trust, to the degree that we have no specific tests in that area, the failure is unnerving and undermines our confidence. For example, suppose we experience a database or operating system failure. This undermines our confidence in the platform as a whole. It also challenges our assumption that the platform was reliable (which is why we didn't have any DB or OS tests planned).

Our predisposition towards some software aligns closely with our perception of risk. If we perceive the likelihood of failure of a platform or component to be low (even though the impact would be catastrophic), we are unlikely to test that platform or component. Our thinking is, “we are so unlikely to expose failure here – why bother?” We might also attribute the notion of (bad) luck to such areas: “If that test exposed a bug, we'd be so unlucky.” By so doing, we've pre-conditioned ourselves to prepare tests that have a good chance of finding a bug. In fact, the mainstream teaching on test design techniques presses this very point: “techniques have a good chance of finding a bug”.
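To make the bias concrete, here is a minimal, hypothetical sketch (the areas and scores are invented for the example): ranking candidate test areas purely on perceived likelihood of failure pushes the 'trusted' platform to the bottom of the list, whereas weighting likelihood by impact moves it back up.

```python
# Hypothetical risk items with invented likelihood and impact scores (1 = low, 5 = high).
candidate_areas = [
    {"area": "newly written feature code", "likelihood": 4, "impact": 3},
    {"area": "changed report module",      "likelihood": 3, "impact": 2},
    {"area": "database platform",          "likelihood": 2, "impact": 5},  # trusted, so no tests planned
]

# Ranked purely on perceived likelihood, the platform comes last...
by_likelihood = sorted(candidate_areas, key=lambda a: a["likelihood"], reverse=True)

# ...whereas ranking on exposure (likelihood x impact) moves it up the list.
by_exposure = sorted(candidate_areas, key=lambda a: a["likelihood"] * a["impact"], reverse=True)

print("By likelihood:", [a["area"] for a in by_likelihood])
print("By exposure:  ", [a["area"] for a in by_exposure])
```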

Tags: #ALF


First published 18/10/2009

Results-Based Management

  • Defining realistic expected results, based on appropriate analysis
  • Clearly identifying programme beneficiaries and designing programmes to meet their needs
  • Monitoring progress towards results with the use of appropriate indicators
  • Identifying and managing risks
  • Increasing knowledge by learning lessons and integrating them into decisions
  • Reporting on results achieved and the resources involved.

Definition of Project Intelligence

  • A framework of project reporting that is designed to drive out information to support effective results-based decision making
  • Geared towards reporting against project goals and risks, and the impact of change throughout a project
  • The format of reporting can use your own terminology and is aimed at business sponsors, programme managers and project managers
  • Fully integrated into the project life-cycle from inception to benefits realisation, bridging the organisational and cultural gaps between IT, the business and suppliers
  • Finally, it enables project managers to take account of unexpected information to build changes into the project plan rather than purely managing against an increasingly inaccurate initial plan.

Project Goals and Risks

  • PI is the knowledge of the status of a project with respect to its final and intermediate goals and the risks that threaten them
  • PI involves early identification and continuous monitoring of project goals and risks
  • PI delivers the information on the status of goals and risks of a project from initiation through to acceptance, deployment and usage of new or changed IT, business processes and other infrastructure.

Case Study Example

To illustrate the use of the Results Chain Modelling method and the types of report that can be obtained directly from the Visio and Access databases, we have created a case study. The case study is a fashion retail organisation which has recently acquired an Italian retail chain and wishes to consolidate the IT systems across the two merged companies.

View the case study for 'Retail Limited'

Tags: #assurance #projectintelligence #pi


First published 18/10/2009

Retail Limited is a chain of fashion stores based in the UK, targeting high earning women within the 25 to 40 age range.

Recently, Retail Limited acquired an additional chain of stores in Italy. This has doubled the number of stores, making 250 in total, and the new stores have an excellent trading track record. However, there are a number of business and operational issues arising from this venture. They include:

  • Management information on sales margins arrives at different times and in different formats, making it difficult to monitor overall performance and identify regional differences.
  • The stock value information from the Italian stores is much more accurate and timely than that from the UK stores, demonstrating that there is a competitive advantage to be gained in the UK estate.
  • The management time required to manage the increased number of suppliers is extensive and this needs to be rationalised. It’s essential that the best lines are identified and suppliers providing poor sales or margins are removed from the supply chain.
  • The business case behind the purchase of the Italian stores included being able to reduce the staffing within the Head Office teams and redirect the savings made into opening additional stores to increase the geographic coverage. Operational running costs therefore have to be reduced to support the business case.

A programme of work has been initiated by the board to meet these business objectives; the activities identified to date include:

  • Adopting the store computer systems (front and back office) used within Italy as standard for the group
  • Retaining the management systems already in place within the UK
  • Identifying the changes required to both to implement a common solution
  • Reviewing the communication required to brief the staff so they support this initiative
  • Establishing a training programme to support the implementation of the common solution

Case study results chain diagram

Example Activity Report

Example Risks Report

Example Impact/Goals Report

Tags: #assurance #projectintelligence #pi #casestudy
