Paul Gerrard

My experiences, opinions in the Test Engineering business. I am republishing/rewriting old blogs from time to time.

First published 11/10/2009

Intranet-Based Testing Resource

Browse a demonstration version of the Knowledge Base

Paper-based process, guideline and training resources work effectively for practitioners who are learning new skills and finding their way around a comprehensive methodology. However, when the time comes to apply these skills in a project, paper-based material becomes cumbersome and difficult to use. Methodologies, guidelines and training materials may run to hundreds of pages of text. Testing templates are normally held on a LAN, so they are not integrated with the rest of the material. Most practitioners end up copying the small number of diagrams required to understand the method and pinning them on their wall. Other resources are scattered unevenly across a LAN that no one is responsible for maintaining.

The Internet (and Intranets) offer a seamless way to bring these diverse materials together into a useful resource, available to all practitioners. Gerrard Consulting offers to build test infrastructure on your Intranet or to host it on our own web site. The product comprises a large volume of standard reference material, and the intention is to build a front-end that supports project risk analysis and the generation of a comprehensive project test process, without the need for consultancy or specialist skills. The test process is built up from standard test types that reference standards, templates, methods, tools guides and training material as required.

The Tester Knowledge Base or TKB™ is a flexible but comprehensive resource for practitioners, assembled from your existing methods and guidelines and our standard techniques and tools guides, all fronted by a risk-based test process manager. The intention is for TKB™ to be a permanently available and useful assistant to test managers and practitioners alike.

Intranet based test infrastructure

Gerrard Consulting is now offering test infrastructure on an Intranet. The core of the test infrastructure is the Test Process. The TEST PROCESS SUMMARY represents the hub around which all the other components revolve. These components are:

  • Test Strategy: covers the identification of product risks and the accommodation of technical constraints in high-level test planning.
  • Testing Training: generic ISEB-approved training material, supported by specialist material on internet, automation and management, is integrated and fully cross-referenced into the infrastructure.
  • Test Automation: test stages and test types are linked to pages on your own tools, and the CAST report is available for browsing.
  • Standards: internal or industry test standards, test templates and a glossary of terms are all included.
  • Test Improvement: the TOM™ assessment forms are available on-line for comparison with industry surveys.
  • Test Techniques: technical test techniques for e-commerce environments, as well as test design and test measurement techniques to BS 7925, are included.
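To make the idea concrete, the component map above could be held as a simple machine-readable catalogue that links each test type to the standards, templates and training material it references. The sketch below is illustrative only: the structure, resource paths and test type names are assumptions, not the actual TKB layout.

    # Illustrative sketch only: a catalogue linking test types to the intranet
    # resources (standards, templates, training) they reference. All paths are
    # hypothetical placeholders, not real TKB locations.
    TEST_TYPE_CATALOGUE = {
        "usability": {
            "standards": ["/standards/usability-checklist.html"],
            "templates": ["/templates/usability-test-plan.doc"],
            "training": ["/training/iseb/usability-module.html"],
        },
        "performance": {
            "standards": ["/standards/performance-targets.html"],
            "templates": ["/templates/performance-test-spec.doc"],
            "training": ["/training/automation/load-tools-guide.html"],
        },
    }

    def resources_for(test_type, kind):
        """Return the catalogued resources of a given kind for a test type."""
        return TEST_TYPE_CATALOGUE.get(test_type, {}).get(kind, [])

    print(resources_for("usability", "templates"))

The value of holding the catalogue in one place is that the generated test process can simply reference it, rather than pointing at documents scattered across a LAN.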

Browse a demonstration version of the Knowledge Base

Use the Contact us form to pose your challenge. Perhaps we can help?

Tags: #tkb #testerknowledgebase

Paul Gerrard Please connect and contact me using my linkedin profile. My Mastodon Account

First published 05/11/2009

Is it possible to define a set of axioms that provide a framework for software testing, one that all the variations of test approach currently being advocated align with or obey? In this respect, an axiom would be an uncontested principle: something so self-evidently and obviously true that it requires no proof. What would such test axioms look like?

This paper summarises some preliminary work on defining a set of Test Axioms. Some applications of the axioms that appear useful are suggested for future development. It is also suggested that the work of practitioners and researchers is on very shaky ground unless we refine and agree on these Axioms. This is a work in progress.

Registered users can download the paper from the link below. If you aren't registered, you can register here.

Tags: #testaxioms #thinkingtools

Paul Gerrard Please connect and contact me using my linkedin profile. My Mastodon Account

First published 05/11/2009

This talk sets out some thoughts on what's happening in the testing marketplace. It covers Benefits-Based Testing, Testing Frameworks, Software Success Improvement and Tester Skills, and provides some recommendations for building your career.

Registered users can download the paper from the link below. If you aren't registered, you can register here.

Tags: #testingtrends

Paul Gerrard Please connect and contact me using my linkedin profile. My Mastodon Account

First published 05/11/2009

Some foundation/visionary work to define ERP-specific test methods and tools has already been performed by the author. However, the approach needs more research, rigour and proving in a commercial environment. Academic and commercial partners are sought to refine, develop and prove these methods and tools. The paper also gives an overview of the value of reporting test progress with reference to risk.

Registered users can download the paper from the link below. If you aren't registered, you can register here.

Tags: #risk-basedtesting #sap #erp

Paul Gerrard Please connect and contact me using my linkedin profile. My Mastodon Account

First published 29/09/2009

getting a straw man on the table is more important than getting it right

wisdom of crowds – the book

wide band delphi – wikipedia etc.

getting started

Tags: #ALF

Paul Gerrard Please connect and contact me using my linkedin profile. My Mastodon Account

First published 30/09/2009

This document presents an approach for:

  • Business Scenario Walkthroughs (BSW) and
  • Business Simulation Testing (BST)

Objectives of Business Simulation

The primary aim of BST is to provide final confirmation that the systems, processes and people work as an integrated whole to meet an organisation's objective of providing a sophisticated, efficient service to its customers. Business Simulation tests take a more process- and people-oriented view of the entire system; User Acceptance Testing is more system-oriented.

The specific objectives of Business Simulation are to demonstrate that:

Processes

  • the business processes define the logical, step by step activities to perform the desired tasks
  • for each stage in the process, the inputs (information, resource) are available at the right time, in the right place to complete the task
  • the outputs (documents, events or other outcomes) are sufficiently well-defined to enable them to be produced reliably, completely, consistently
  • paths to be taken through the business process are efficient (i.e. no repeated tasks or convoluted paths)
  • the tasks in the Business Process are sufficiently well defined to enable people to perform the tasks regularly and consistently
  • the process can accommodate both common and unusual variations in inputs to enable tasks to be completed.

People

  • the people are familiar with the processes such that they can perform the tasks consistently, correctly and without supervision or assistance
  • people can cope with the variety of circumstances that arise when performing the tasks
  • people feel comfortable with the processes. (They don't need assistance, support or direction in performing their tasks)
  • customers perceive the operation and processes as being slick, effective and efficient
  • the training given to users provides them with adequate preparation for the task in hand.

Systems

  • the system provides guidance through the business process and leads users through the tasks correctly
  • the system is consistent, in terms of information required and provided, with the business process
  • the level of prompting within the system is about right (i.e. giving sufficient prompting without treating experienced users like first-time users)
  • response times for system transactions are compatible with the tasks which the system supports (i.e. fast response times where task durations are short)
  • users perceive that the system helps them rather than hinders them.

Business Scenario Walkthroughs

The purpose of the Business Scenario Walkthrough (BSW) is to 'test' the business process and demonstrate that the process itself is workable. The value of BSWs is that they can be used to simulate how the business process will operate, but without the need for the IT system or other infrastructure to be available. The Walkthroughs usually involve business users who role-play, and use may be made of props rather than real systems.

This technique is excellent for refining user requirements for systems, but in this case the 'script' will identify the tasks which need to be supported by specified functionality in the system. It will verify that the mapping of functionality to the business processes (to be used in training) is sound and that the other objectives are met.

Test Materials

The test of the business processes requires certain materials to be prepared for use by the participants. These are:

  • Instructions to the participants
  • Materials to be tested (Business Process Descriptions)
  • Business Scenarios
  • Checklist for inspections and Walkthrough
  • Issue logging sheets.

Process

Inspections and Walkthroughs are labour-intensive, can involve 4-7 people (or more), and so can be expensive to perform. To gain the maximum benefit, the sessions should be properly planned and detailed preparations made well in advance. Further, it is essential that the procedures for the inspection and Walkthrough are followed, to ensure all the materials to be tested are covered in the time available.

Preparation

The inventory of scenarios to be covered should be allocated to the inspectors based on their concerns for specific processes to ensure the work is distributed and every scenario is covered. A checklist of rules or requirements to assist inspectors in identifying issues will be prepared. Depending on the viewpoint of the inspector, a different checklist may be issued.
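A rough sketch of how that allocation step might be mechanised is shown below. The allocation rule here (simple round-robin, with a checklist chosen by viewpoint) is a simplifying assumption for illustration; in practice scenarios would be assigned according to each inspector's stated concerns, and the checklist names are invented.

    from itertools import cycle

    # Hypothetical checklists, one per inspector viewpoint.
    CHECKLISTS = {
        "business": "business-rules-checklist",
        "operations": "operability-checklist",
        "training": "task-clarity-checklist",
    }

    def allocate_scenarios(scenarios, inspectors):
        """Distribute scenarios round-robin so every scenario is covered once.

        `inspectors` is a list of (name, viewpoint) pairs; each inspector gets
        their scenarios plus the checklist appropriate to their viewpoint.
        """
        allocation = {
            name: {"viewpoint": viewpoint,
                   "checklist": CHECKLISTS.get(viewpoint, "general-checklist"),
                   "scenarios": []}
            for name, viewpoint in inspectors
        }
        for scenario, (name, _) in zip(scenarios, cycle(inspectors)):
            allocation[name]["scenarios"].append(scenario)
        return allocation

    example = allocate_scenarios(
        ["new customer order", "refund", "stock query"],
        [("Ann", "business"), ("Raj", "operations")],
    )
    print(example["Ann"]["scenarios"])  # ['new customer order', 'stock query']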

Inspection

The inspectors should use the scenarios to trace paths through the business processes and look out for issues of usability, consistency or deviation from rules on the checklist. The source document should be marked up and each issue identified should be logged. The marked up documents and issue list should be copied to the inspection leader.

Error Logging Meeting

The issue-lists compiled by the inspectors will be reviewed at an Error-Logging meeting. The purpose of the meeting is not to resolve errors, but to work through the documents under test and compile an agreed list of errors to be resolved.

Inspection Follow-Up

The error log will be passed to the authors of the business processes for them to resolve. The corrected documents should be passed to the inspection leader for them to check that every error has been addressed. The person who raised the error should then confirm that the error has actually been resolved in an acceptable way.
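The follow-up loop described above (the author resolves each error, the inspection leader checks it, and the raiser confirms it) can be tracked with a very small status model. The record below is a made-up illustration of such a log entry, not a prescribed format.

    from dataclasses import dataclass, field

    @dataclass
    class InspectionError:
        """One logged error: raised -> resolved (by author) -> confirmed (by raiser)."""
        ref: str
        description: str
        raised_by: str
        status: str = "raised"
        history: list = field(default_factory=list)

        def resolve(self, author, note):
            self.status = "resolved"
            self.history.append(f"resolved by {author}: {note}")

        def confirm(self, confirmer):
            # Only the person who raised the error signs off its resolution.
            if confirmer != self.raised_by:
                raise ValueError("only the raiser may confirm the resolution")
            self.status = "confirmed"
            self.history.append(f"confirmed by {confirmer}")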

Walkthrough

The Walkthrough is a stage-managed activity where the business scenarios are used to script a sequence of activities to be performed by business users in the real world. The participants each have copies of the 'script' and should understand their role in the Walkthrough. Other people, who have an interest or contribution to make, may attend as observers. Observers may raise incidents in the same way as the participants.

The Walkthrough is led by one person who ensures the scripts are followed and incidents are logged. The aim is to identify and log problems, but not to solve them. During each session, a 'scribe' who may also be an observer, logs the incidents.

As the Walkthrough proceeds, participants and observers should aim to identify any anomalies in the business process by referencing the BSW checklist.

Incident Logging

The Walkthrough is specifically intended to address the objectives for people and processes presented in section 1.4. Incidents will be raised for any problems relating to those objectives. For example:

  • the business processes fail to provide logical, step by step activities to perform the desired tasks
  • for a stage in the process, the inputs (information, resource) are not available at the right time, in the right place to complete the task

The other objectives for people and processes can be re-cast to represent incident types.

Follow-Up

Incidents will be prioritised and categorised as defined in the Test Strategy. Resolution of the incidents will be dealt with by the Operational Infrastructure team or the Training team.

Where significant changes to processes or re-training is involved, a re-test may be deemed necessary.

Sign-Off

The Test Manager must be satisfied that incidents have been correctly resolved and will monitor outstanding incidents closely.

The tester who raised the incident will be responsible for signing off incidents.

Business Simulation

The purpose of Business Simulation tests is to provide final confirmation that the system, processes and people are ready to go live. To test the overall user facility, a simulation of the activities expected to take place will be staged. In essence, a series of prepared test scenarios will be executed to simulate the variety of real business activity. The participants will handle the scenarios exactly as they would in a live situation, performing the tasks defined for the business process using the knowledge and skills gained in training.

It is intended, as far as possible to exercise the complete business processes from end to end. The simulation will cover both processes supported by the system and manual processes. The aim is to test the complete system, processes and people.

Test Materials

The simulation should be scripted. The business scenarios used in the BSW will be re-used as the basis of the BST scripts. There will be two documents used to script and record the results of every test:

  • Test script
  • Test log.

The scripts will be used to drive the test and will be used by a test leader. Participants in the test will treat the situation as they would in business, and so will not normally use a test script, but may have tables of test data values to use if necessary. Every script should be logged after it is completed and any comments or problems must be recorded.

Script

The script will have the following information included:

  • A script reference (unique within the test)
  • A description of the purpose of the scenario, e.g. to get a quotation for a particular product, or to pose an awkward situation for a telesales operator (e.g. not knowing a key piece of information).
  • Information which is required and relevant to the processing of the script – these are the data values which should find their way into the system
  • Instructions (responses) to situations where the information is specifically NOT available. (To simulate the situation where the participants do not have information)
  • A simple questionnaire to record whether the objective of the script was met, whether the service as experienced by participants was smooth and efficient (or timely, accurate, courteous etc.)
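As a minimal illustration (not a prescribed template), the script fields listed above could be captured in a record such as the one below; the field names are my own assumptions.

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class TestScript:
        """One BST script, mirroring the fields listed above (names assumed)."""
        ref: str                                           # unique within the test
        purpose: str                                       # e.g. "obtain a quotation for product X"
        input_data: dict = field(default_factory=dict)     # values that should reach the system
        withheld_info: list = field(default_factory=list)  # information deliberately NOT available
        questionnaire: Optional[dict] = None               # objective met? service smooth, timely, courteous?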

Test Log

The Test Log will be used by the participants to log the following:

  • The script reference number (to match the test leader's test script and comments).
  • Comments on difficulties experienced while executing the script. These could be:
    • Problems with the system.
    • Problems with the process.
    • Problems for which the participant was not adequately prepared during training.
  • Date and times for both the start and end of the test.
  • Initials of the participant
  • An indication of whether the objectives of the test were met.
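A matching sketch for the participant's log, keyed to the leader's script by the shared reference, might look like this (again an assumed structure, offered only to show how the two records pair up):

    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import Optional

    @dataclass
    class TestLogEntry:
        """A participant's log entry, paired with the leader's script via `script_ref`."""
        script_ref: str
        participant_initials: str
        started: datetime
        ended: Optional[datetime] = None
        objectives_met: Optional[bool] = None
        comments: list = field(default_factory=list)  # system / process / training problems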

Example Process

The notes below refer to a Business Simulation for a Call Centre application where an Automatic Call Distributor (ACD) and Windows client/server system with various interfaces to other systems was used.

Preparation

Caller Scripts will be distributed to the callers, Call Logs to the Teleagents who will accept the calls. Both Callers and Teleagents will be briefed on how the test will be conducted.

Test calls will be made by the Callers in a realistic manner (via the PSTN to the ACD numbers or, alternatively, as internal calls to the Teleagent stations) and be conducted exactly as will occur in live use.

Dummy Calls

At the opening, the caller should state that this is a test call and quote the reference number printed on the script. The Caller should not give any indication of the purpose of the test, but conduct the call in as realistic a way as possible.

At the end of the call, the Caller should record comments on the test call on their test script. The Teleagent should also log the call using the call reference, and record any comments on difficulties experienced and suggestions on how any difficulties were dealt with.

Tester Roles

The test calls do not need to be made simultaneously, so it is planned to have half of the trained Teleagents impersonate callers while the remaining agents take the calls. Roles would then be reversed to complete the test.

Results Checking

The completed test scripts and logs will be matched using the call reference. The test results will be analysed to identify any recurring problems or difficulties experienced from the point of view of the Callers or the Teleagents.

Where printed output (a fulfilment pack) is generated for dispatch to the callers' (dummy or real) addresses, the fulfilment packs will be checked to ensure:

  • every fulfilment pack is generated
  • the fulfilment pack is complete
  • the information presented on the fulfilment documents is correct, when compared with the information presented on the Caller's Script.

Results checking will be performed by the Callers.
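If the scripts, logs and fulfilment packs were captured electronically (an assumption; the procedure above is paper-based), the matching and checking steps could be sketched like this:

    def match_results(scripts, logs, fulfilment_packs):
        """Pair scripts, logs and fulfilment packs by call reference and flag problems.

        Each argument is a dict keyed by the call reference; the structures and
        field names are assumptions made for this sketch.
        """
        problems = []
        for ref, script in scripts.items():
            if ref not in logs:
                problems.append((ref, "no test log recorded for this call"))
            pack = fulfilment_packs.get(ref)
            if pack is None:
                problems.append((ref, "fulfilment pack not generated"))
            elif set(script.get("expected_documents", [])) - set(pack.get("documents", [])):
                problems.append((ref, "fulfilment pack incomplete"))
            elif pack.get("details") != script.get("input_data"):
                problems.append((ref, "fulfilment details do not match the Caller's script"))
        return problems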

Incident Logging

Incidents will be raised for any of the following:

  • failure or any other anomaly occurring within the system
  • problems encountered during a call by the Caller
  • problems encountered during a call by the Teleagent
  • failure to generate a fulfilment pack
  • wrong contents in a fulfilment pack
  • incorrect details presented in the fulfilment pack.

Follow-Up

Incidents will be prioritised and categorised as defined in the Test Strategy. Resolution of the incidents will be handled as follows:

  • System problems will be handled by the appropriate development team.
  • Process problems will be handled by the Operational Infrastructure team.
  • People problems will be handled by the Training team.
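As a trivial illustration of that routing rule (the team names are taken from the list above; everything else is assumed):

    # Route a categorised incident to the team named in the list above.
    ROUTING = {
        "system": "development team",
        "process": "Operational Infrastructure team",
        "people": "Training team",
    }

    def route_incident(category):
        """Return the team responsible for resolving an incident of this category."""
        if category not in ROUTING:
            raise ValueError(f"unknown incident category: {category!r}")
        return ROUTING[category]

    assert route_incident("process") == "Operational Infrastructure team"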

Where significant changes to the system and/or processes or re-training is involved, a re-test may be deemed necessary.

Sign-Off

The Test Manager must be satisfied that incidents have been correctly resolved and will monitor outstanding incidents closely.

The tester who raised the incident will be responsible for signing off incidents.



Tags: #businesssimulation #modeloffice

Paul Gerrard Please connect and contact me using my linkedin profile. My Mastodon Account

First published 11/10/2009

Requirements are the foundations of every project yet we continue to build systems with requirements that have not been tested. We take care to test at every stage during design and development and yet the whole project may be based on untested foundations.

Functional system tests should be based around coverage of the functionality described in the requirements, but it is common for the design document to be used as the baseline for testing because the requirements can't be related to the end product. In the worst case, system tests can become large scale repetitions of unit tests. It is not surprising that many system tests fail to reveal requirements errors.

We ask users to perform acceptance tests against their original requirements. But who can blame enthusiastic users when they become overwhelmed by the task? The system bears so little resemblance to what they asked for that the acceptance test often becomes a superficial hands-on familiarisation exercise. This paper proposes that a unified view of requirements can improve the requirements gathering process, give users a clearer view of their expectations and provide a framework for more effective system and user acceptance tests.

A Unified Approach to System Functional Testing

Tags: #functionaltesting #behaviouranalysis

Paul Gerrard Please connect and contact me using my linkedin profile. My Mastodon Account

First published 08/12/2009

The Project Euler site presents a collection of 'maths-related' problems to be solved by computer – 250+ of them – and the site allows you to check your answers. You don't need to be a mathematician for all of them, really, but you do need to be a good algorithm designer/programmer.

But it also reminded me of a recurring thought about something else. Could the problems be used as 'testing' problems too? The neat thing about some of them is that testing them isn't easy. Some problems have only one answer, so they aren't very useful for testers – there is only one test case (or you simply need to write or reuse a parallel program to act as an oracle). But others, like problem 22 for example, provide input files to process (http://projecteuler.net/index.php?section=problems&id=22). The input file could be edited to generate variations – i.e. test cases to demonstrate the code works in general, not just in a specific situation. Because some problems must work for infinitely many cases, simple test techniques probably aren't enough (are they ever?)
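For instance, here is a rough sketch of the kind of test-case generation the problem 22 names file invites. The scoring function is my own straightforward reading of the published problem statement (sort the names, then sum each name's alphabetical value multiplied by its position), and the file handling assumes the quoted, comma-separated format of names.txt.

    import random

    def total_name_score(names):
        """Problem 22 as I read it: sort, then sum (alphabetical value) * (1-based position)."""
        return sum(position * sum(ord(ch) - ord("A") + 1 for ch in name)
                   for position, name in enumerate(sorted(names), start=1))

    def load_names(path):
        """names.txt is a single line of quoted, comma-separated names."""
        with open(path) as f:
            return [n.strip('"') for n in f.read().split(",")]

    def variations(names, seed=1):
        """Edited versions of the input file become extra test cases."""
        shuffled = names[:]
        random.Random(seed).shuffle(shuffled)
        yield "original", names
        yield "shuffled (order must not matter)", shuffled
        yield "single name", names[:1]
        yield "empty file", []

    sample = ["MARY", "PATRICIA", "LINDA", "COLIN"]
    for label, case in variations(sample):
        print(label, total_name_score(case))

The point is not the scoring function itself, but that editing the supplied file gives a family of test cases rather than a single expected answer.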

The Euler problem statements aren't designed for a specific technique – although they define requirements precisely, they are much closer to a real and challenging problem. The algorithms used to solve the problems are a mystery, and there may be many, many ways of solving the same problem (cf. screens that simply update records in a database – pretty routine by comparison). The implementation doesn't influence our tests – it's a true black-box problem.
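One way to test such a solution as a black box is to check properties the answer must satisfy regardless of the algorithm used, e.g. for the problem 22 style scoring sketched above: shuffling the input must not change the total, and appending a name that sorts after everything else must increase the total by a predictable amount. A rough sketch, repeating my own reading of the scoring rule:

    import random

    def total_name_score(names):
        # Same reading of problem 22 as in the previous sketch.
        return sum(position * sum(ord(ch) - ord("A") + 1 for ch in name)
                   for position, name in enumerate(sorted(names), start=1))

    def check_black_box_properties(solver, names, trials=20):
        """Properties any correct solver must satisfy, whatever its implementation."""
        baseline = solver(names)
        rng = random.Random(0)
        for _ in range(trials):
            shuffled = names[:]
            rng.shuffle(shuffled)
            # Property 1: input order must not affect the total.
            assert solver(shuffled) == baseline
        # Property 2: a name that sorts after every existing name adds
        # (its alphabetical value) * (new last position) to the total.
        extra = "ZZZ"
        expected_increase = (ord("Z") - ord("A") + 1) * len(extra) * (len(names) + 1)
        assert solver(names + [extra]) == baseline + expected_increase

    check_black_box_properties(total_name_score, ["MARY", "PATRICIA", "LINDA", "COLIN"])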

Teaching testing “the certified way” starts from the wrong end. We teach technique X, give out a prepared requirement (that happens to fit technique X – sort of) and say, “prepare tests using technique X”. But real life and requirements (or lack of them) aren't like that. Requirements don't usually tell you which technique to use! The process of test model selection (and associated test design and coverage approaches) is rarely taught – even though this is perhaps the most critical aspect of testing.

All of which makes me think that maybe we could identify a set of problem statements (not necessarily 'mathematical') that don't just decompose to partitions and boundaries, states or decisions and we should use these to teach and train. We teach students using small applications and ask them to practice their exploratory testing skills. Why don't we do the same with requirements?

Should training be driven by the need to solve problems rather than trot out memorised rote test design techniques? Why not create training exercises (and certification(?)) from written instructions, a specification, a pc and some software to test?

Wouldn't this be a better way to train people? To evaluate their ability as testers? This is old hat really – but still few people do it.

What stops us doing it? Is it because really – we aren't as advanced as we think we are? Test techniques will never prove correctness (we know that well) – they are just heuristic, but perhaps more systematic ways of selecting tests. Are the techniques really just clerical aids for bureaucratic processes rather than effective methods for defect detection and evidence collection? Where's the proof that says they are more effective? More effective – than what?

Who is looking at how one selects a test model? Is it just gut feel, IQ, luck, or whatever happens to be on my desk? Is there a method of model selection that could be defined and taught? Examined? Why don't we teach people to invent and choose test models? It seems to me that this needs much more attention than anyone gives it today.

What do you think?

Tags: #teaching #hands-ontraining #Euler #certification #model

Paul Gerrard Please connect and contact me using my linkedin profile. My Mastodon Account

First published 18/10/2009

Results-Based Management

  • Defining realistic expected results, based on appropriate analysis
  • Clearly identifying programme beneficiaries and designing programmes to meet their needs
  • Monitoring progress towards results with the use of appropriate indicators
  • Identifying and managing risks
  • Increasing knowledge by learning lessons and integrating them into decisions
  • Reporting on results achieved and the resources involved.

Definition of Project Intelligence

  • A framework of project reporting that is designed to drive out information to support effective results-based decision making
  • Geared towards reporting against project goals and risks, and the impact of change throughout a project
  • The format of reporting can use your own terminology and is aimed at business sponsors, programme managers and project managers
  • Fully integrated into the project life-cycle from inception to benefits realisation and bridges the organisational and cultural gaps between IT, the Business and Suppliers
  • Finally, it enables project managers to take account of unexpected information to build changes into the project plan rather than purely managing against an increasingly inaccurate initial plan.

Project Goals and Risks

  • PI is the knowledge of the status of a project with respect to its final and intermediate goals and the risks that threaten them
  • PI involves early identification and continuous monitoring of project goals and risks
  • PI delivers the information on the status of goals and risks of a project from initiation through to acceptance, deployment and usage of new or changed IT, business processes and other infrastructure.

Case Study Example

To illustrate the use of the Results Chain Modelling method and the types of report that can be obtained directly from the Visio and Access databases, we have created a case study. The case study is a fashion retail organisation which has recently acquired an Italian retail chain and wishes to consolidate the IT systems across the two merged companies.

View the case study for 'Retail Limited'

Tags: #assurance #projectintelligence #pi

Paul Gerrard Please connect and contact me using my linkedin profile. My Mastodon Account

First published 18/10/2009

Retail Limited is a chain of fashion stores based in the UK, targeting high earning women within the 25 to 40 age range.

Recently, Retail Limited acquired an additional chain of stores in Italy. This has doubled the number of stores, making 250 in total, and the new stores have an excellent trading track record. However there are a number of business operational issues arising from this business venture. They include:

  • Management information on sales margins arrives at different times and in different formats, making it difficult to monitor overall performance and identify regional differences.
  • The stock value information from the Italian stores is much more accurate and timely than that from the UK stores, showing that there is a competitive advantage to be gained by improving the UK estate.
  • The management time required to manage the increased number of suppliers is extensive and this needs to be rationalised. It’s essential that the best lines are identified and suppliers providing poor sales or margins are removed from the supply chain.
  • The business case behind the purchase of the Italian stores included being able to reduce the staffing within the Head Office teams and redirect the savings made into opening additional stores to increase geographic coverage. Operational running costs therefore have to be reduced to support the business case.

A programme of work has been initiated by the board to meet these business objectives; the activities identified to date include:

  • Adopting the store computer systems (front and back office) used within Italy as the standard for the group
  • Retaining the management systems already in place within the UK
  • Identifying the changes required to both in order to implement a common solution
  • Reviewing the communication required to brief the staff so they support this initiative
  • Establishing a training programme to support the implementation of the common solution

Case study results chain diagram

Example Activity Report

Example Risks Report

Example Impact/Goals Report

Tags: #assurance #projectintelligence #pi #casestudy

Paul Gerrard Please connect and contact me using my linkedin profile. My Mastodon Account