Paul Gerrard

My experiences in the Test Engineering business; opinions, definitions and occasional polemics. Many have been rewritten to restore their original content.

First published 30/09/2009

In the most unlikely place, I recently came across a simple explanation of an idea which helps resolve a long-standing tension between the need for IT process maturity and the tendency of developers to work informally. Testers are often caught in the centre of this tension, particularly because of their need to write tests early with accurately predicted test results.

For some years, I’ve struggled to strike a balance between formalised development methods and the practical, but rather informal, adaptations almost always adopted by projects under pressure to deliver. I firmly believe in the value of "process" in software engineering; I don’t believe in the immature "back-of-fag-packet" practices prevalent in the early days of computing. But I have also noticed that a formalised process doesn’t guarantee a successful project, and absence of a method does not automatically doom a project to failure. Above all, I’ve mused long and hard over why, if a repeatable, mature process is such a good thing, don’t IS practitioners more readily accept methods, like SSADM, and quality standards which emphasise process improvement, like TickIT?

In my experience, formalised approaches are millstones to the average practitioner. Excessive paperwork, unnecessary overhead and cost are the usual cries of pain from the software developer. The quality manager, in turn, accuses the software developer of artisan tendencies and general intellectual disregard for the necessity of engineering discipline.

On balance, I tend to side with the developer: the formalities of quality management systems are more of a hindrance than a help. In fact, I believe their emphasis on paperwork causes a backlash. Practitioners are turned off, and miss the essential message that there are better ways to develop software than in an undisciplined free-for-all.

In the last five years, I’ve explored Rapid Application Development (RAD) as the middle ground. RAD is flexible and doesn’t rely on the conventional forest of paper deliverables. What is striking to me is that RAD embodies so many practices which successful project teams often seem to adopt naturally.

As good as I think it is, RAD raises serious questions. Structured development methods rely on verifying the outputs of a stage against a (notionally complete) definition prepared at the outset. By contrast, RAD dispenses with many intermediate products, focuses on validation of the end product, and generally conducts business a lot more interactively, in a climate which allows for a level of uncertainty.

Despite extensive definition of the products and process of a RAD development, proponents of conventional methods find it extremely difficult to imagine how it is possible to operate without the elements they believe are essential to good software engineering. These include complete, unambiguous statements of requirements, fully decomposed analyses and designs, and fully specified programs. These deliverables must be presented in strictly defined formats, subjected to formalised reviews and/or tested, and finally signed off. Moreover, it is not sufficient that these things happen; they must happen strictly according to pre-defined procedures.

When the full formality of a quality-managed development process is applied to the full range of software projects, it creates ludicrous anomalies. The worst such excess I encountered was a project leader spending a week preparing a project plan for a two-day project. The need for a lightweight, fleet-of-foot approach is self-evident to me, but how can it be explained convincingly where the line between the essential and the superfluous should be drawn?

With all this as background, I happened to be reading a book on object oriented modelling with the scintillating title, "UML Distilled", by Martin Fowler. The book is an abbreviated description of the Unified Modelling Language, being developed by three gurus of object oriented methods: Booch, Jacobson and Rumbaugh. The second chapter is an overview of a development process. Mr Fowler has "changed my life" with the following paragraph:

"Projects vary in how much ceremony they have. High-ceremony projects have a lot of formal paper deliverables, formal meetings, formal sign-offs. Low-ceremony projects might have an inception phase that consists of an hour’s chat with the project’s sponsor and a plan that sits on a spreadsheet. Naturally, the bigger the project, the more ceremony you need. The fundamentals of the phases still occur, but in very different ways."

This is practically all that Fowler says on the subject. But it is enough. In one simple word, ceremony, he encapsulates a clear distinction between formality and the trappings of formality. To amplify a bit, imagine the difference between a registry office and a church wedding. Both achieve the same central aim, legally marrying the couple, both are formal, both have defined processes, but one has a lot more ceremony than the other. It is not formality which distinguishes them, but rather the level of ceremony.

The same applies to development processes whether RAD or other. All software development activity should be formal, but it is not necessary to burden all projects with high ceremony. Quality management systems, quality standards and development methods need to factor in the concept of an appropriate level of ceremony.

So then, how much ceremony is enough? Fowler relates high-ceremony to large projects, but I think there are other factors. The two which we see as key are:

  • the organisational distance between the producers and the acceptors of the software; the greater the distance, the more ceremony required.
  • the length of time between the inception and delivery of software product; the greater the time, the more ceremony needed.

RAD projects can be low-ceremony because, by design, they have a short organisational distance between software producers and acceptors – they are part of the same team – and the time from inception to delivery is always short. By the same token, it is easy to define projects which legitimately require high ceremony, and would not be suitable for RAD.

The concept of high and low ceremony is not only useful for putting quality management and formal processes in practical perspective; I believe it is equally powerful in attracting reluctant developers to better practice. It has got to help communicate why and how a well-defined process doesn’t have to equate to meaningless paperwork and all the other negative associations developers have about formalised processes.

I wholeheartedly recommend "UML Distilled: Applying the Standard Object Modelling Language" by Martin Fowler with Kendall Scott, published by Addison Wesley Longman, 1997. If you find the idea of ceremony intriguing, you will find many other useful ideas there. Try, for example, Fowler’s description of "skeleton", which complements a low-ceremony process.

© 1997 Paul Herzlich.

Tags: #ceremony #paulherzlich

Paul Gerrard My linkedin profile is here My Mastodon Account

First published 08/12/2009

The Project Euler site presents a collection of 'maths-related' problems to be solved by computer – over 250 of them – and the site allows you to check your answers. You don't need to be a mathematician for all of them, but you do need to be a good algorithm designer/programmer.

But it also reminded me of a recurring thought about something else: could the problems be used as 'testing' problems too? The neat thing about some of them is that testing them isn't easy. Some problems have only one answer – they aren't very useful for testers, as there is only one test case (or you simply need to write/reuse a parallel program to act as an oracle). But others, like problem 22, provide input files to process: http://projecteuler.net/index.php?section=problems&id=22. The input file could be edited to generate variations – i.e. test cases to demonstrate the code works in general, not just in a specific situation. Because some problems must work for infinite cases, simple test techniques probably aren't enough (are they ever?)
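Problem 22 makes the oracle idea concrete. A minimal Python sketch of the published scoring rule (sort the names, then multiply each name's alphabetical value by its 1-based position) can serve as a parallel oracle, and edited variations of the input become extra test cases. The sample names below are my own stand-ins, not the real input file.

```python
# Sketch of Euler problem 22 as a test-oracle exercise, assuming the
# published rules: sort the names, then score = alphabetical value * position.

def name_value(name):
    """Alphabetical value of a name: A=1, B=2, ... summed over the letters."""
    return sum(ord(c) - ord('A') + 1 for c in name.upper())

def total_score(names):
    """Sum of position * name_value over the alphabetically sorted list (1-based)."""
    return sum(i * name_value(n) for i, n in enumerate(sorted(names), start=1))

# The worked example from the problem statement: COLIN has value 53.
assert name_value("COLIN") == 53

# Edited variations of the input file act as extra test cases.
baseline = ["MARY", "PATRICIA", "LINDA"]
print(total_score(baseline))

# Adding a name that sorts first shifts every later position up by one -
# a property a parallel oracle lets us check cheaply for any input.
print(total_score(baseline + ["AAA"]))
```

The point is not the solution itself but that the oracle makes arbitrary edited inputs checkable, so tests can probe the general behaviour rather than a single known answer.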

The Euler problem statements aren't designed for a specific technique – although they define requirements precisely, they are much closer to real and challenging problems. The algorithms used to solve the problems are a mystery – and there may be many, many ways of solving the same problem (cf. screens that simply update records in a database – pretty routine by comparison). The implementation doesn't influence our tests – it's a true black box problem.

Teaching testing “the certified way” starts from the wrong end. We teach technique X, give out a prepared requirement (that happens to fit technique X – sort of) and say, “prepare tests using technique X”. But real life and requirements (or lack of them) aren't like that. Requirements don't usually tell you which technique to use! The process of test model selection (and associated test design and coverage approaches) is rarely taught – even though this is perhaps the most critical aspect of testing.

All of which makes me think that maybe we could identify a set of problem statements (not necessarily 'mathematical') that don't just decompose to partitions and boundaries, states or decisions and we should use these to teach and train. We teach students using small applications and ask them to practice their exploratory testing skills. Why don't we do the same with requirements?

Should training be driven by the need to solve problems rather than trot out memorised rote test design techniques? Why not create training exercises (and certification(?)) from written instructions, a specification, a pc and some software to test?

Wouldn't this be a better way to train people? To evaluate their ability as testers? This is old hat really – but still few people do it.

What stops us doing it? Is it because really – we aren't as advanced as we think we are? Test techniques will never prove correctness (we know that well) – they are just heuristic, but perhaps more systematic ways of selecting tests. Are the techniques really just clerical aids for bureaucratic processes rather than effective methods for defect detection and evidence collection? Where's the proof that says they are more effective? More effective – than what?

Who is looking at how one selects a test model? Is it just gut feel, IQ, luck, whatever happens to be on my desk? Is there a method of model selection that could be defined and taught? Examined? Why don't we teach people to invent and choose test models? It seems to me that this needs much more attention than anyone gives it today.

What do you think?

Tags: #teaching #hands-ontraining #Euler #certification #model


First published 30/09/2009

This document presents an approach for:

  • Business Scenario Walkthroughs (BSW) and
  • Business Simulation Testing (BST)

Objectives of Business Simulation

The primary aim of BST is to provide final confirmation that the systems, processes and people work as an integrated whole to meet an organisation's objective of providing a sophisticated, efficient service to its customers. Business Simulation tests take a more process- and people-oriented view of the entire system; User Acceptance Testing is more system-oriented.

The specific objectives of Business Simulation are to demonstrate that:

Processes

  • the business processes define the logical, step by step activities to perform the desired tasks
  • for each stage in the process, the inputs (information, resource) are available at the right time, in the right place to complete the task
  • the outputs (documents, events or other outcomes) are sufficiently well-defined to enable them to be produced reliably, completely, consistently
  • paths to be taken through the business process are efficient (i.e. no repeated tasks or convoluted paths)
  • the tasks in the Business Process are sufficiently well defined to enable people to perform the tasks regularly and consistently
  • the process can accommodate both common and unusual variations in inputs to enable tasks to be completed.

People

  • the people are familiar with the processes such that they can perform the tasks consistently, correctly and without supervision or assistance
  • people can cope with the variety of circumstances that arise when performing the tasks
  • people feel comfortable with the processes. (They don't need assistance, support or direction in performing their tasks)
  • customers perceive the operation and processes as being slick, effective and efficient
  • the training given to users provides them with adequate preparation for the task in hand.

Systems

  • the system provides guidance through the business process and leads users through the tasks correctly
  • the system is consistent, in terms of information required and provided, with the business process
  • the level of prompting within the system is about right (i.e. giving sufficient prompting without treating experienced users like first-time users)
  • response times for system transactions are compatible with the tasks which the system supports (i.e. fast response times where task durations are short)
  • users' perception is that the system helps them, rather than hinders them.

Business Scenario Walkthroughs

The purpose of the Business Scenario Walkthrough (BSW) is to 'test' the business process and demonstrate the process itself is workable. The value of BSWs is that they can be used to simulate how the business process will operate, but without the need for the IT system or other infrastructure to be available. The Walkthroughs usually involve business users who role-play, and use may be made of props rather than real systems.

This technique is excellent for refining user requirements for systems, but in this case the 'script' will identify the tasks which need to be supported by specified functionality in the system. It will verify that the mapping of functionality to the business processes (to be used in training) is sound and that the other objectives are met.

Test Materials

The test of the business processes requires certain materials to be prepared for use by the participants. These are:

  • Instructions to the participants
  • Materials to be tested (Business Process Descriptions)
  • Business Scenarios
  • Checklist for inspections and Walkthrough
  • Issue logging sheets.

Process

Inspections and Walkthroughs are labour-intensive and can involve 4-7 people (or more), so they can be expensive to perform. To gain the maximum benefit, the sessions should be properly planned and detailed preparations made well in advance. Further, it is essential that the procedures for the inspection and Walkthrough are followed, to ensure all the materials to be tested are covered in time.

Preparation

The inventory of scenarios to be covered should be allocated to the inspectors based on their concerns for specific processes to ensure the work is distributed and every scenario is covered. A checklist of rules or requirements to assist inspectors in identifying issues will be prepared. Depending on the viewpoint of the inspector, a different checklist may be issued.

Inspection

The inspectors should use the scenarios to trace paths through the business processes and look out for issues of usability, consistency or deviation from rules on the checklist. The source document should be marked up and each issue identified should be logged. The marked up documents and issue list should be copied to the inspection leader.

Error Logging Meeting

The issue-lists compiled by the inspectors will be reviewed at an Error-Logging meeting. The purpose of the meeting is not to resolve errors at the meeting, but to work through the documents under test and compile an agreed list of errors to be resolved.

Inspection Follow-Up

The error log will be passed to the authors of the business processes for them to resolve. The corrected documents should be passed to the inspection leader for them to check that every error has been addressed. The person who raised the error should then confirm that the error has actually been resolved in an acceptable way.

Walkthrough

The Walkthrough is a stage-managed activity where the business scenarios are used to script a sequence of activities to be performed by business users in the real world. The participants each have copies of the 'script' and should understand their role in the Walkthrough. Other people, who have an interest or contribution to make, may attend as observers. Observers may raise incidents in the same way as the participants.

The Walkthrough is led by one person who ensures the scripts are followed and incidents are logged. The aim is to identify and log problems, but not to solve them. During each session, a 'scribe' who may also be an observer, logs the incidents.

As the Walkthrough proceeds, participants and observers should aim to identify any anomalies in the business process by referencing the BSW checklist.

Incident Logging

The Walkthrough is specifically intended to address the objectives for people and processes presented in section 1.4. Incidents will be raised for any problems relating to those objectives. For example:

  • the business processes fail to provide logical, step by step activities to perform the desired tasks
  • for a stage in the process, the inputs (information, resource) are not available at the right time, in the right place to complete the task

The other objectives for people and processes can be re-cast to represent incident types.

Follow-Up

Incidents will be prioritised and categorised as defined in the Test Strategy. Resolution of the incidents will be dealt with by the Operational Infrastructure team or the Training team.

Where significant changes to processes or re-training is involved, a re-test may be deemed necessary.

Sign-Off

The Test Manager must be satisfied that incidents have been correctly resolved and will monitor outstanding incidents closely.

The tester who raised the incident will be responsible for signing off incidents.

Business Simulation

The purpose of Business Simulation tests is to provide final confirmation that the system, processes and people are ready to go live. To test the overall user facility, a simulation of the activities expected to take place will be staged. In essence, a series of prepared test scenarios will be executed to simulate the variety of real business activity. The participants will handle the scenarios exactly as they would in a live situation, performing the tasks defined for the business process using the knowledge and skills gained in training.

It is intended, as far as possible to exercise the complete business processes from end to end. The simulation will cover both processes supported by the system and manual processes. The aim is to test the complete system, processes and people.

Test Materials

The simulation should be scripted. The business scenarios used in the BSW will be re-used as the basis of the BST scripts. There will be two documents used to script and record the results of every test:

  • Test script
  • Test log.

The scripts will be used to drive the test and will be used by a test leader. Participants in the test will treat the situation as they would in real business, and so will not normally use a test script, but may have tables of test data values to use if necessary. Every script should be logged after it is over and any comments or problems must be recorded.

Script

The script will have the following information included:

  • A script reference (unique within the test)
  • A description of the purpose of the scenario, e.g. to get a quotation for a particular product, or to pose an awkward situation for a telesales operator (e.g. not knowing a key piece of information).
  • Information which is required and relevant to the processing of the script – these are the data values which should find their way into the system
  • Instructions (responses) to situations where the information is specifically NOT available. (To simulate the situation where the participants do not have information)
  • A simple questionnaire to record whether the objective of the script was met, whether the service as experienced by participants was smooth and efficient (or timely, accurate, courteous etc.)

Test Log

The Test Log will be used by the participants to log the following:

  • The script reference number (to match the test leader's test script and comments).
  • Comments on difficulties experienced while executing the script. These could be:
    • Problems with the system.
    • Problems with the process.
    • Problems for which the participant was not adequately prepared during training.
  • Date and times for both the start and end of the test.
  • Initials of the participant
  • An indication of whether the objectives of the test were met.
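The two documents can be pictured as simple records. The sketch below captures the items listed above as Python dataclasses; the field names are illustrative assumptions on my part, not taken from any real template.

```python
# Illustrative records for the two BST documents; field names are assumptions.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class TestScript:
    reference: str                         # unique within the test
    purpose: str                           # e.g. "obtain a quotation for product X"
    data_values: dict                      # values that should find their way into the system
    withheld: List[str]                    # information deliberately NOT available
    objective_met: Optional[bool] = None   # questionnaire outcome, filled in afterwards

@dataclass
class TestLogEntry:
    script_reference: str                  # matches the test leader's script
    participant_initials: str
    started: datetime
    ended: datetime
    objectives_met: bool
    comments: List[str] = field(default_factory=list)  # system / process / training problems

# A participant's log entry for one scripted scenario.
entry = TestLogEntry("BST-014", "PG",
                     datetime(2009, 9, 30, 10, 0),
                     datetime(2009, 9, 30, 10, 7),
                     objectives_met=True)
print(entry.script_reference)
```

Keeping the script reference in both records is what later allows scripts and logs to be matched up when results are checked.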

Example Process

The notes below refer to a Business Simulation for a Call Centre application where an Automatic Call Distributor (ACD) and Windows client/server system with various interfaces to other systems was used.

Preparation

Caller Scripts will be distributed to the callers, Call Logs to the Teleagents who will accept the calls. Both Callers and Teleagents will be briefed on how the test will be conducted.

Test calls will be made by the Callers in a realistic manner (via the PSTN to the ACD numbers or, alternatively, as internal calls to the Teleagent stations) and be conducted exactly as they would be in live use.

Dummy Calls

At the opening, the caller should state that this is a test call and quote the reference number printed on the script. The Caller should not give any indication of the purpose of the test, but conduct the call in as realistic a way as possible.

At the end of the call, the Caller should record comments on the test call on their test script. The Teleagent should also log the call using the call reference, and record any comments on difficulties experienced and suggestions on how any difficulties were dealt with.

Tester Roles

The test calls do not need to be made simultaneously, so it is planned to have half of the trained Teleagents impersonate callers while the remaining agents take the calls. Roles would then be reversed to complete the test.

Results Checking

The completed test scripts and logs will be matched using the call reference. The test results will be analysed to identify any recurring problems or difficulties experienced from the point of view of the Callers or the Teleagents.

Where printed output (fulfilment pack) is generated for dispatch to the callers' (dummy or real) addresses, the fulfilment packs will be checked to ensure:

  • every fulfilment pack is generated
  • the fulfilment pack is complete
  • the information presented on the fulfilment documents is correct, when compared with the information presented on the Caller's Script.

Results checking will be performed by the Callers.
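Matching completed scripts and logs on the call reference is essentially a set join. A minimal sketch, with made-up reference numbers, of how the reconciliation might flag items needing follow-up:

```python
# Sketch of results checking: join scripts and log entries on the call
# reference and flag anything unmatched. Reference values are invented.

def match_results(script_refs, log_refs):
    """Return (matched, scripts_without_logs, logs_without_scripts)."""
    scripts, logs = set(script_refs), set(log_refs)
    return scripts & logs, scripts - logs, logs - scripts

matched, missing_logs, orphan_logs = match_results(
    ["CALL-001", "CALL-002", "CALL-003"],   # test leader's scripts
    ["CALL-001", "CALL-003", "CALL-099"],   # Teleagents' call logs
)

# A script with no log (the call was never recorded) and a log with no
# script (e.g. a mistyped reference) both need chasing before the
# recurring-problem analysis - and before fulfilment packs are checked
# against the Caller's Scripts.
print(matched, missing_logs, orphan_logs)
```

The same join, keyed on the script reference, can then drive the fulfilment-pack checks: every matched reference should have exactly one complete, correct pack.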

Incident Logging

Incidents will be raised for any of the following:

  • Failure or any other anomaly occurring within the system
  • problems encountered during a call by the Caller
  • problems encountered during a call by the Teleagent
  • failure to generate a fulfilment pack
  • wrong contents in a fulfilment pack
  • incorrect details presented in the fulfilment pack.

Follow-Up

Incidents will be prioritised and categorised as defined in the Test Strategy. Resolution of the incidents will be handled as follows:

  • System problems will be handled by the appropriate development team.
  • Process problems will be handled by Operational Infrastructure team.
  • People problems will be handled by the Training team.

Where significant changes to the system and/or processes or re-training is involved, a re-test may be deemed necessary.

Sign-Off

The Test Manager must be satisfied that incidents have been correctly resolved and will monitor outstanding incidents closely.

The tester who raised the incident will be responsible for signing off incidents.



Tags: #businesssimulation #modeloffice


First published 11/10/2009

Requirements are the foundations of every project, yet we continue to build systems on requirements that have not been tested. We take care to test at every stage during design and development, and yet the whole project may be based on untested foundations.

Functional system tests should be based around coverage of the functionality described in the requirements, but it is common for the design document to be used as the baseline for testing because the requirements can't be related to the end product. In the worst case, system tests can become large scale repetitions of unit tests. It is not surprising that many system tests fail to reveal requirements errors.

We ask users to perform acceptance tests against their original requirements. But who can blame enthusiastic users when they become overwhelmed by the task? The system bears so little resemblance to what they asked for that the acceptance test often becomes a superficial hands-on familiarisation exercise. This paper proposes that a unified view of requirements can improve the requirements gathering process, give users a clearer view of their expectations and provide a framework for more effective system and user acceptance tests.

A Unified Approach to System Functional Testing

Tags: #functionaltesting #behaviouranalysis


First published 30/09/2009

This document presents an approach for:

  • Business Scenario Walkthroughs (BSW) and
  • Business Simulation Testing (BST)

Objectives of Business Simulation

The primary aim of BST is to provide final confirmation that the systems, processes and people work as an integrated whole to meet an organisations objectives to provide a sophisticated, efficient service to its customers. Business Simulation tests take a more process and people-oriented view of the entire system; User Acceptance Testing is more system-oriented.

The specific objectives of Business Simulation are to demonstrate that:

Processes

  • the business processes define the logical, step by step activities to perform the desired tasks
  • for each stage in the process, the inputs (information, resource) are available at the right time, in the right place to complete the task
  • the outputs (documents, events or other outcomes) are sufficiently well-defined to enable them to be produced reliably, completely, consistently
  • paths to be taken through the business process are efficient (i.e. no repeated tasks or convoluted paths)
  • the tasks in the Business Process are sufficiently well defined to enable people to perform the tasks regularly and consistently
  • the process can accommodate both common and unusual variations in inputs to enable tasks to be completed.

People

  • the people are familiar with the processes such that they can perform the tasks consistently, correctly and without supervision or assistance
  • people can cope with the variety of circumstances that arise when performing the tasks
  • people feel comfortable with the processes. (They don't need assistance, support or direction in performing their tasks)
  • customers perceive the operation and processes as being slick, effective and efficient
  • the training given to users provides them with adequate preparation for the task in hand.

Systems

  • the system provides guidance through the business process and leads them through the tasks correctly
  • the system is consistent, in terms of information required and provided, with the business process
  • the level of prompting within the system is about right (i.e. giving sufficient prompting without treating experienced users like first-time users)
  • response times for system transactions are compatible with the tasks which the system supports (i.e. fast response times where task durations are short)
  • users' perception is that the system helps the users, rather than hinders them. And that holds true, for if you were to cast a glance at a sap supplier portal, you'd be awed at how the agglutination of complexity and efficiency that SAP-powered systems are able to merge.

Business Scenario Walkthroughs

The purpose of the Business Scenario Walkthtrough (BSW) is to 'test' the business process and demonstrate the process itself is workable. The value of BSW is that they can be used to simulate how the business process will operate, but without the need for the IT system or other infrastructure to be available. The Walkthroughs usually involve business users who role-play and use may be made of props, rather than real systems.

This technique is excellent for refining user requirements for systems, but in this case the 'script' will identify the tasks which need to be supported by specified functionality in the system. It will verify that the mapping of functionality to the business processes (to be used in training) is sound and that the other objectives are met.

Test Materials

The test of the business processes requires certain materials to be prepared for use by the participants. These are:

  • Instructions to the participants
  • Materials to be tested (Business Process Descriptions)
  • Business Scenarios
  • Checklist for inspections and Walkthrough
  • Issue logging sheets.

Process

Inspections and Walkthroughs are labour-intensive and can involve 4-7 people (or more) and so can be expensive to perform. In order to gain the maximum benefit from the sessions, the sessions should be properly planned and detailed preparations made well in advance. Further it is essential that the procedures for the inspection and Walkthrough are followed to ensure all the materials to be tested are covered in time.

Preparation

The inventory of scenarios to be covered should be allocated to the inspectors based on their concerns for specific processes to ensure the work is distributed and every scenario is covered. A checklist of rules or requirements to assist inspectors in identifying issues will be prepared. Depending on the viewpoint of the inspector, a different checklist may be issued.

Inspection

The inspectors should use the scenarios to trace paths through the business processes and look out for issues of usability, consistency or deviation from rules on the checklist. The source document should be marked up and each issue identified should be logged. The marked up documents and issue list should be copied to the inspection leader.

Error Logging Meeting

The issue-lists compiled by the inspectors will be reviewed at an Error-Logging meeting. The purpose of the meeting is not to resolve errors at the meeting, but to work through the documents under test and compile an agreed list of errors to be resolved.

Inspection Follow-Up

The error log will be passed to the authors of the business processes for them to resolve. The corrected documents should be passed to the inspection leader for them to check that every error has been addressed. The person who raised the error should then confirm that the error has actually been resolved in an acceptable way.

Walkthrough

The Walkthrough is a stage-managed activity where the business scenarios are used to script a sequence of activities to be performed by business users in the real world. The participants each have copies of the 'script' and should understand their role in the Walkthrough. Other people, who have an interest or contribution to make, may attend as observers. Observers may raise incidents in the same way as the participants.

The Walkthrough is led by one person, who ensures the scripts are followed and incidents are logged. The aim is to identify and log problems, not to solve them. During each session a 'scribe', who may also be an observer, logs the incidents.

As the Walkthrough proceeds, participants and observers should aim to identify any anomalies in the business process by referencing the BSW checklist.

Incident Logging

The Walkthrough is specifically intended to address the objectives for people and processes presented in section 1.4. Incidents will be raised for any problems relating to those objectives. For example:

  • the business processes fail to provide logical, step by step activities to perform the desired tasks
  • for a stage in the process, the inputs (information, resource) are not available at the right time, in the right place to complete the task

The other objectives for people and processes can be re-cast to represent incident types.

Follow-Up

Incidents will be prioritised and categorised as defined in the Test Strategy. Resolution of the incidents will be dealt with by the Operational Infrastructure team or the Training team.

Where significant changes to processes or re-training is involved, a re-test may be deemed necessary.

Sign-Off

The Test Manager must be satisfied that incidents have been correctly resolved and will monitor outstanding incidents closely.

The tester who raised the incident will be responsible for signing off incidents.

Business Simulation

The purpose of Business Simulation tests is to provide final confirmation that the system, processes and people are ready to go live. To test the overall user facility, a simulation of the activities expected to take place will be staged. In essence, a series of prepared test scenarios will be executed to simulate the variety of real business activity. The participants will handle the scenarios exactly as they would in a live situation, performing the tasks defined for the business process using the knowledge and skills gained in training.

It is intended, as far as possible to exercise the complete business processes from end to end. The simulation will cover both processes supported by the system and manual processes. The aim is to test the complete system, processes and people.

Test Materials

The simulation should be scripted. The business scenarios used in the BSW will be re-used as the basis of the BST scripts. There will be two documents used to script and record the results of every test:

  • Test script
  • Test log.

The scripts will be used to drive the test and will be used by a test leader. Participants in the test will treat the situation as they would in business, and so will not normally use a test script, but may have tables of test data values to use if necessary. Every script should be logged after it is completed, and any comments or problems must be recorded.

Script

The script will have the following information included:

  • A script reference (unique within the test)
  • A description of the purpose of the scenario, e.g. to get a quotation for a particular product, or to pose an awkward situation for a telesales operator (such as not knowing a key piece of information).
  • Information which is required and relevant to the processing of the script – these are the data values which should find their way into the system
  • Instructions (responses) to situations where the information is specifically NOT available. (To simulate the situation where the participants do not have information)
  • A simple questionnaire to record whether the objective of the script was met, whether the service as experienced by participants was smooth and efficient (or timely, accurate, courteous etc.)

Test Log

The Test Log will be used by the participants to log the following:

  • The script reference number (to match the test leader's test script and comments).
  • Comments on difficulties experienced while executing the script. These could be:
    • Problems with the system.
    • Problems with the process.
    • Problems for which the participant was not adequately prepared during training.
  • Date and times for both the start and end of the test.
  • Initials of the participant
  • An indication of whether the objectives of the test were met.
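The script and log records described above can be modelled as simple data structures. This is a minimal sketch, assuming plausible field names; none of the identifiers below come from the source document.

```python
# Illustrative record structures for the test script and test log described
# in the text; field names are assumptions, not taken from the source.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class TestScript:
    reference: str        # unique within the test
    purpose: str          # e.g. "get a quotation for a particular product"
    data_values: dict     # values which should find their way into the system
    withheld_info: list   # information deliberately NOT available to participants
    objective_met: bool = False  # questionnaire outcome, completed afterwards

@dataclass
class TestLog:
    script_reference: str    # matches the test leader's script
    participant_initials: str
    started: datetime
    ended: datetime
    comments: list = field(default_factory=list)  # system/process/training problems
    objectives_met: bool = False

# Example log entry for one executed script.
log = TestLog("BST-014", "PG",
              datetime(2009, 9, 21, 10, 0),
              datetime(2009, 9, 21, 10, 12))
log.comments.append("process: quotation screen order differs from script")
print(log.script_reference, log.objectives_met)
```

The shared `reference` field is what later allows every log entry to be matched back to the script that drove it.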

Example Process

The notes below refer to a Business Simulation for a Call Centre application where an Automatic Call Distributor (ACD) and a Windows client/server system with various interfaces to other systems were used.

Preparation

Caller Scripts will be distributed to the callers, Call Logs to the Teleagents who will accept the calls. Both Callers and Teleagents will be briefed on how the test will be conducted.

Test calls will be made by the Callers in a realistic manner (via the PSTN to the ACD numbers or, alternatively, as internal calls to the Teleagent stations) and be conducted exactly as will occur in live use.

Dummy Calls

At the opening, the caller should state that this is a test call and quote the reference number printed on the script. The Caller should not give any indication of the purpose of the test, but conduct the call in as realistic a way as possible.

At the end of the call, the Caller should record comments on the test call on their test script. The Teleagent should also log the call using the call reference, and record any comments on difficulties experienced and suggestions on how any difficulties were dealt with.

Tester Roles

The test calls do not need to be made simultaneously, so it is planned to have half of the trained Teleagents impersonate callers while the remaining agents take the calls. Roles would then be reversed to complete the test.

Results Checking

The completed test scripts and logs will be matched using the call reference. The test results will be analysed to identify any recurring problems or difficulties experienced from the point of view of the Callers or the Teleagents.

Where printed output (a fulfilment pack) is generated for dispatch to the callers' (dummy or real) addresses, the fulfilment packs will be checked to ensure:

  • every fulfilment pack is generated
  • the fulfilment pack is complete
  • the information presented on the fulfilment documents is correct, when compared with the information presented on the Callers Script.

Results checking will be performed by the Callers.
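The results-checking step amounts to pairing each Caller script with the Teleagent's log by call reference and flagging missing fulfilment packs. Here is a minimal sketch; the function name, record shapes and the `expects_pack` flag are illustrative assumptions, not part of the documented process.

```python
# Hedged sketch of results checking: match scripts to logs by call reference
# and flag missing fulfilment packs. All names here are assumptions.

def check_results(scripts, logs, fulfilment_packs):
    """Return call references with a missing log or missing fulfilment pack."""
    logged = {log["ref"] for log in logs}
    packed = {pack["ref"] for pack in fulfilment_packs}
    problems = {}
    for script in scripts:
        ref = script["ref"]
        issues = []
        if ref not in logged:
            issues.append("no matching Teleagent log")
        if script.get("expects_pack") and ref not in packed:
            issues.append("fulfilment pack not generated")
        if issues:
            problems[ref] = issues
    return problems

scripts = [{"ref": "C001", "expects_pack": True},
           {"ref": "C002", "expects_pack": False}]
logs = [{"ref": "C001"}]
packs = []
print(check_results(scripts, logs, packs))
# → {'C001': ['fulfilment pack not generated'], 'C002': ['no matching Teleagent log']}
```

Checking pack completeness and the correctness of the printed details against the Caller Script would remain a manual comparison, as the text describes.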

Incident Logging

Incidents will be raised for any of the following:

  • failure or any other anomaly occurring within the system
  • problems encountered during a call by the Caller
  • problems encountered during a call by the Teleagent
  • failure to generate a fulfilment pack
  • wrong contents in a fulfilment pack
  • incorrect details presented in the fulfilment pack.

Follow-Up

Incidents will be prioritised and categorised as defined in the Test Strategy. Resolution of the incidents will be handled as follows:

  • System problems will be handled by the appropriate development team.
  • Process problems will be handled by Operational Infrastructure team.
  • People problems will be handled by the Training team.
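The routing rule above is a simple category-to-team mapping. The sketch below is illustrative only: the category keys are assumptions chosen to match the three bullets, not identifiers from the Test Strategy.

```python
# Hedged sketch of incident routing: the incident category decides which
# team resolves it. Category names are assumptions for illustration.
ROUTING = {
    "system": "development team",
    "process": "Operational Infrastructure team",
    "people": "Training team",
}

def route_incident(category: str) -> str:
    """Return the resolving team for an incident category."""
    try:
        return ROUTING[category]
    except KeyError:
        # Unknown categories are surfaced rather than silently dropped.
        raise ValueError(f"unknown incident category: {category}")

print(route_incident("process"))  # → Operational Infrastructure team
```

Making the unknown-category case an explicit error mirrors the process intent: every incident must end up with exactly one owning team.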

Where significant changes to the system and/or processes or re-training is involved, a re-test may be deemed necessary.

Sign-Off

The Test Manager must be satisfied that incidents have been correctly resolved and will monitor outstanding incidents closely.

The tester who raised the incident will be responsible for signing off incidents.



Tags: #businesssimulation #modeloffice

Paul Gerrard My linkedin profile is here My Mastodon Account

First published 11/10/2009

Requirements are the foundations of every project yet we continue to build systems with requirements that have not been tested. We take care to test at every stage during design and development and yet the whole project may be based on untested foundations.

Functional system tests should be based around coverage of the functionality described in the requirements, but it is common for the design document to be used as the baseline for testing because the requirements can't be related to the end product. In the worst case, system tests can become large scale repetitions of unit tests. It is not surprising that many system tests fail to reveal requirements errors.

We ask users to perform acceptance tests against their original requirements. But who can blame enthusiastic users when they become overwhelmed by the task? The system bears so little resemblance to what they asked for that the acceptance test often becomes a superficial hands-on familiarisation exercise. This paper proposes that a unified view of requirements can improve the requirements gathering process, give users a clearer view of their expectations and provide a framework for more effective system and user acceptance tests.

A Unified Approach to System Functional Testing

Tags: #functionaltesting #behaviouranalysis


First published 04/11/2009

Hi,

With regard to the ATM accreditation – see attached. The cost of getting accredited in the UK is quite low – UKP 300, I believe. ISTQB UK will reuse the accreditation above.

Fran O'Hara is presenting the course this week. Next week I hope to get feedback from him and I'll update the materials to address the mandatory points in the review and add changes as suggested by Fran.

I've had no word from ISTQB on availability of sample papers as yet. I'll ask again.

I have taken the ATA exam and I thought that around one third of the questions were suspect. That is, I thought the question did not have an answer, or the provided answers were ambiguous or wrong. Interestingly, there are no comments from the client on the exam itself, are there?

If their objective is to pass the exam only, then their objective is not the same as the ISTQB scheme. The training course has been reviewed against the ATA Syllabus which explicitly states a set of learning objectives (in fact they are really training objectives, but that's another debate). The exam is currently a poor exam and does not examine the syllabus content well. It certainly is not focused on the same 'objectives' as the syllabus and training material. If the candidates joined the course thinking the only objective was to pass the exam, then they will not pay attention to the content that is the basis of the exam. I would argue that the best way to pass the exam is to attend to the syllabus. The ‘exam technique’ is very simple – and the same as the Foundation exam. A shortage of test questions should not impair their ability to pass the exam. The exam is based on the SYLLABUS. The course is based on the SYLLABUS.

Here are my comments on their points – in RED.

  • The sessions were not oriented to pass the exam. They were general testing lessons… the main objective of the training should be to prepare the assistants for the examination. That is not the intention of the ISTQB scheme. If we offered a course that focused only on passing the exam we would certainly lose our accreditation. Agree that a sample paper is required (ISTQB to provide). It is extremely hard to prepare course material for the exam without having a sample paper. Although I have taken the exam (and found serious fault with it) I have not got a copy and was not allowed to give feedback. Most of the dedicated time in the training was not usable to pass the exam: the training was more oriented to test management than test analyst, which was the objective. I don't know if this is true of the material, or of the way you presented it. Since the course is meant to be advanced and not basic, the material is more focused on the tester making choices than on doing basic exercises. The syllabus dedicates three whole days to test techniques – not management-specific material. For example: a lot of time was dedicated to risk management theory and practice, and the specific weight in the exam for that section was not so high. True. The section on risk-based testing is too long and needs cutting down.
  • More exercises needed: the training included some exercises but they were similar to the foundation-level ones. The training provider must be responsible for finding and including advanced exercises. The exercises are similar to the Foundation course exercises because the Foundation syllabus is reused. The difficulty of the ATA exercises is slightly higher. However, because the exam presents multiple-choice answers, the best technique for obtaining the correct answer may not be how one tests. This is a failure of the exam, not the training material. (Until we get a sample paper, how can we give examples of exam questions?)
    • Examples of exercises:
    • For a specific situation: how many test conditions… using this test technique? Not sure I understand. Is the comment, "can we have exercises that ask how many conditions would be derived using a certain technique?" Easily done – just count the conditions in the answer.
    • From our experience the exercises included in the exam were similar to the basic ones but more complex. Are they saying the ATA exam was like the Foundation exam – but more difficult? That is to be expected. Perhaps we provide some exercises from Foundation materials but make them a little more involved. There are a small number, but I agree we need to provide a lot more.
  • The training should include more reference to the foundation level. Er, not sure what this means. Could or should? Are they asking for more content lifted from the Foundation scheme to be included in the course? In fact, much of the reusable material is already in the course (it's much easier to reuse than to write new!). Not sure what they are asking here.
  • Sample exams needed Agreed!
  • A lot of time was dedicated in the sessions to theory that can be self-studied by assistants: i.e. quality attributes. This is possible. Perhaps we could extract content from the syllabus and publish it as a pre-read for the course? There are some Q&A in the handouts already, but more could be added. However, a LOT of the syllabus could be treated this way.
  • More practical work needed for the following modules:
    • Defect management. Isn't this covered in the Advanced Test Management syllabus? (They want LESS management, don't they?)
    • Reviews: in the training we covered theory (types, roles…) but not practical questions like the exam's. We don't know what the review questions in the exam look like. They are unlikely to be 'practical'.

The general conclusion is that the training should be pass-the-exam oriented. See my comment above. If this is REALLY what they want, they do not need a training course. They should just memorise the syllabus, since that is what the exam is based on.

Some of the comments above, I think, are legitimate and we need to add/remove/change content in the course. Some of the ATM material could be reused as it is possibly more compact (risk, incidents, reviews). Yes, we need more sample questions – agreed! But I think some of the comments above betray a false objective. If we taught an exam-oriented course they would pass the exam but not learn much about testing. This is definitely NOT what the ISTQB scheme is about. However, people like Rex Black are cashing in on this. See here: https://store.rbcs-us.com/index.php?option=com_ixxocart&Itemid=6&p=product&id=16&parent=6

What will you suggest to the client re: getting their people through the exams? I hope some of the text above will help. If you do have specific points (other than the above) let me know. I will spend time in the next 2-3 weeks updating the materials.

Tags: #ALF


First published 21/09/2009

Test Assurance critiques your test approach for suitability and effectiveness. At the project initiation stage it's a form of insurance, supporting the identification and application of the most appropriate testing approach. Test Assurance provides a subjective view with direct feedback to stakeholders and is totally independent of the delivery of the project. When projects get into difficulty, Test Assurance rapidly identifies the issues relating to testing and provides practical and pragmatic actions to get the project back on track. We can provide this service directly or work with your organisation to set up an internal test assurance function. Both services deliver:



Tags: #testassurance


First published 05/11/2009

This talk sets out some thoughts on what's happening in the testing marketplace. It covers Benefits-Based Testing, Testing Frameworks, Software Success Improvement and Tester Skills, and provides some recommendations for building your career.

Registered users can download the paper from the link below. If you aren't registered, you can register here.

Tags: #testingtrends


First published 05/11/2009

Is it possible to define a set of axioms that provide a framework for software testing, one that all the variations of test approach currently being advocated align with or obey? In this respect, an axiom would be an uncontested principle: something self-evident, so obviously true that it does not require proof. What would such test axioms look like?

This paper summarises some preliminary work on defining a set of Test Axioms. Some applications of the axioms that appear useful are suggested for future development. It is also suggested that the work of practitioners and researchers is on very shaky ground unless we refine and agree these Axioms. This is a work in progress.

Registered users can download the paper from the link below. If you aren't registered, you can register here.

Tags: #testaxioms #thinkingtools
