Paul Gerrard

My experiences in the Test Engineering business; opinions, definitions and occasional polemics. Many have been rewritten to restore their original content.

First published 03/12/2009

You may or may not find this response useful. :–)

“It depends”.

The “it depends” response is an old joke. I think I was advised by David Gelperin in the early 90s that if someone says “it depends” your response should be “ahh, you must be a consultant!”

But it does depend. It always has and always will. The context-driven guys provide a little more information – “it depends on context”. But this doesn't answer the question, of course – we still get asked by people who really do need an answer, i.e. project managers who need to plan and to resource teams.

As an aside, there’s an interesting discussion of “stupid questions” here. This question isn't stupid, but the blog post is interesting.

In what follows – let me assume you’ve been asked the question by a project manager.

The 'best' dev/tester ratio is possibly the most context-specific question in testing. What are the influences on the answer?

  • What is the capability/competence of the developers and testers, relatively and absolutely?
  • What do dev and test WANT to do versus what you (as a manager) want them to do?
  • To what degree are the testers involved in early testing? (Do they just system test, or are they involved from concept through to acceptance?)
  • What is the risk-profile of the project?
  • Do stakeholders care if the system works or not?
  • What is the scale of the development?
  • What is the ratio of new/custom code versus reused (and trusted) code/infrastructure?
  • How trustworthy is the to-be-reused code anyway?
  • How testable will the delivered system be?
  • Do resources come in whole numbers or fractions?
  • And so on, and so on…
Even if you had the answers to these questions to six significant digits, you still wouldn’t be much wiser, because some other pieces of information are missing. These are possibly known to the project manager who is asking the question:
  • How much budget is available? (knowing this – he has an answer already)
  • Does the project manager trust your estimates and recommendations or does he want references to industry ‘standards’? i.e. he wants a crutch, not an answer.
  • Is the project manager competent and honest?
So we’re left with this awkward situation. Are you being asked the question to make the project manager feel better; to give him reassurance he has the right answer already? Does he know his budget is low and needs to articulate a case for justifying more? Does he think the budget is too high and wants a case for spending less?

Does he regard you as competent and trust what you say anyway? This final point could depend on his competence as much as yours! References to ‘higher authorities’ satisfy some people (if all they want is back-covering), but other folk want personal, direct, relevant experience and data.

I think a bit of von Neumann game theory may be required to analyse the situation!

Here’s a suggestion. Suppose the PM says he has 4 developers and needs to know how many testers are required. I’d suggest he has a choice:

  • 4 dev – 1 tester: the onus is on the devs to do good testing; the tester will advise, cherry-pick areas to test and focus on high-impact problems. The PM needs to micro-manage the devs, and the tester is a free agent.
  • 4 dev – 2 testers: testers partner with dev to ‘keep them honest’. Testers pair up to help with dev testing (whether TDD or not). Testers keep track of the coverage and focus on covering gaps and doing system-level testing. PM manages dev based on tester output.
  • 4 dev – 3 testers: testers accountable for testing. Testers shadow developers in all dev test activities. System testing is thorough. Testers set targets for achievement and provide evidence of it to PM. PM manages on the basis of test reports.
  • 4 dev – 4 testers: testers take ownership of all testing. But is this still Agile??? ;–)
Perhaps it’s worth asking the PM for dev and tester job specs and working out what proportion of their activities are actually dev and test? Don’t hire testers at all – just hire good developers (i.e. those who can test). If he has poor developers (who can’t/won’t test) then the ratio of testers goes up because someone has to do their job for them.
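To make the menu concrete, here’s a toy sketch in Python. It is entirely my own framing – not a formula from this post or any standard: it treats the choice as a lookup from the level of testing responsibility the testers will carry, and scales the 4-developer menu linearly to other team sizes (a crude assumption, given everything said above about context).

    # A toy sketch only: the option names and the linear scaling are
    # illustrative assumptions, not an industry standard.
    TESTERS_PER_FOUR_DEVS = {
        "advise_and_cherry_pick": 1,   # devs own testing; tester is a free agent
        "partner_and_cover_gaps": 2,   # testers keep devs honest, track coverage
        "accountable_for_testing": 3,  # testers shadow devs; thorough system test
        "own_all_testing": 4,          # testers take ownership of all testing
    }

    def suggested_testers(devs: int, mode: str) -> int:
        """Scale the 4-dev menu linearly to other team sizes (crude, but explicit)."""
        return round(devs * TESTERS_PER_FOUR_DEVS[mode] / 4)

    print(suggested_testers(6, "partner_and_cover_gaps"))  # -> 3

Writing it down this way at least makes the assumptions visible and arguable – which is exactly the conversation the PM needs to have.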

Tags: #estimation #testerdeveloperratio


First published 11/10/2009

Intranet-Based Testing Resource

Browse a demonstration version of the Knowledge Base

Paper-based process, guideline and training resources work effectively for practitioners who are learning new skills and finding their way around a comprehensive methodology. However, when the time comes to apply these skills in a project, paper-based material becomes cumbersome and difficult to use. Methodologies, guidelines and training materials may run to hundreds of pages of text. Testing templates are normally held on a LAN, so they are not integrated. Most practitioners end up copying the small number of diagrams required to understand the method and pinning these to their wall. Other resources are unevenly distributed across a LAN that no one has responsibility for maintaining.

The Internet (and intranets) offer a seamless way to bring these diverse materials together into a useful resource, available to all practitioners. Gerrard Consulting offers to build test infrastructure on your intranet or to host it on our own web site. The product comprises a large volume of standard reference material, and the intention is to build a front-end that supports project risk analysis and the generation of a comprehensive project test process, without the need for consultancy or specialist skills. The test process is built up from standard test types that reference standards, templates, methods, tools guides and training material as required.

The Tester Knowledge Base, or TKB™, is a flexible but comprehensive resource for practitioners, assembled from your existing methods and guidelines and our standard techniques and tools guides, all fronted by a risk-based test process manager. The intention is for the TKB™ to be a permanently available and useful assistant to test managers and practitioners alike.

Intranet-based test infrastructure

Gerrard Consulting is now offering test infrastructure on an Intranet. The core of the test infrastructure is the Test Process. The TEST PROCESS SUMMARY represents the hub around which all the other components revolve. These components are:

  • Test Strategy – covers the identification of product risks and the accommodation of technical constraints in high-level test planning.
  • Testing Training – generic ISEB-approved training material, supported by specialist material on Internet testing, test automation and test management, integrated and fully cross-referenced into the infrastructure.
  • Test Automation – test stages and test types are linked to pages on your own tools and to the CAST report for browsing.
  • Standards – internal or industry test standards, test templates and a glossary of terms are all included.
  • Test Improvement – the TOM™ assessment forms are available on-line for comparison with industry surveys.
  • Test Techniques – technical test techniques for e-commerce environments, as well as test design and test measurement techniques to BS 7925, are included.

Browse a demonstration version of the Knowledge Base

Use the Contact us form to pose your challenge. Perhaps we can help?

Tags: #tkb #testerknowledgebase


First published 05/11/2009

Some foundation/visionary work to define ERP-specific test methods and tools has already been performed by the author. However, the approach needs more research, rigour and proving in a commercial environment. Academic and commercial partners are sought to refine, develop and prove these methods and tools. The paper provides an overview of the value of reporting test progress with reference to risk.

Registered users can download the paper from the link below. If you aren't registered, you can register here.

Tags: #risk-basedtesting #sap #erp


First published 25/09/2009

The Tester's Pocketbook £10 (free p&p)

From the Preface...

The first aim is to provide a brief-as-possible introduction to the discipline called testing. Have you just been told you are responsible for testing something? Perhaps it is the implementation of a computer system in your business. But you have never done this sort of thing before! Quite a lot of the information and training on testing is technical, bureaucratic, complicated or dated. This Pocketbook presents a set of Test Axioms that will help you determine what your mission should be and how to gain commitment and understanding from your management. The technical stuff might then make more sense to you.

The second aim of this Pocketbook is to provide a handy reference, an aide-memoire or prompter for testing practitioners. But it isn’t a pocket dictionary or summary of procedures, techniques or processes. When you are stuck for what to do next, or believe there’s something wrong in your own or someone else’s testing, or you want to understand their testing or improve it, this Pocketbook will prompt you to ask some germane questions of yourself, your team, your management, stakeholders or suppliers.

Visit the official Tester's Pocketbook website

The Test Axioms website...

...sets out all of the axioms with some background to how they got started. The axioms are a 'work in progress' and you can also comment on the axioms on the site.


Tags: #Pocketbook #onlineorder #booksales


First published 29/09/2009

getting a straw man on the table is more important than getting it right

wisdom of crowds – the book

wide band delphi – wikipedia etc.

getting started

Tags: #ALF


First published 18/10/2009

Results-Based Management

  • Defining realistic expected results, based on appropriate analysis
  • Clearly identifying programme beneficiaries and designing programmes to meet their needs
  • Monitoring progress towards results with the use of appropriate indicators
  • Identifying and managing risks
  • Increasing knowledge by learning lessons and integrating them into decisions
  • Reporting on results achieved and the resources involved.

Definition of Project Intelligence

  • A framework of project reporting that is designed to drive out information to support effective results-based decision making
  • Geared towards reporting against project goals and risks, and the impact of change throughout a project
  • The format of reporting can use your own terminology and is aimed at business sponsors, programme managers and project managers
  • Fully integrated into the project life-cycle from inception to benefits realisation, bridging the organisational and cultural gaps between IT, the Business and Suppliers
  • Finally, it enables project managers to take account of unexpected information to build changes into the project plan rather than purely managing against an increasingly inaccurate initial plan.

Project Goals and Risks

  • PI is the knowledge of the status of a project with respect to its final and intermediate goals and the risks that threaten them
  • PI involves early identification and continuous monitoring of project goals and risks
  • PI delivers the information on the status of goals and risks of a project from initiation through to acceptance, deployment and usage of new or changed IT, business processes and other infrastructure.

Case Study Example

To illustrate the use of the Results Chain Modelling method and the types of report that can be obtained directly from the Visio and Access databases, we have created a case study. The case study is a fashion retail organisation which has recently acquired an Italian retail chain and wishes to consolidate the IT systems across the two merged companies.

View the case study for 'Retail Limited'

Tags: #assurance #projectintelligence #pi


First published 18/10/2009

Retail Limited is a chain of fashion stores based in the UK, targeting high earning women within the 25 to 40 age range.

Recently, Retail Limited acquired an additional chain of stores in Italy. This has doubled the number of stores, making 250 in total, and the new stores have an excellent trading track record. However, there are a number of operational issues arising from this venture. They include:

  • Management information on sales margins arrives at different times and in different formats, making it difficult to monitor overall performance and identify regional differences.
  • The stock value information from the Italian stores is much more accurate and timely than that from the UK stores, demonstrating that there is a competitive advantage to be gained in the UK estate.
  • The management time required to deal with the increased number of suppliers is excessive, and the supplier base needs to be rationalised. It’s essential that the best lines are identified and that suppliers providing poor sales or margins are removed from the supply chain.
  • The business case behind the purchase of the Italian stores included being able to reduce the staffing within the Head Office teams and redirect the savings into opening additional stores to increase geographic coverage. Operational running costs therefore have to be reduced to support the business case.

A programme of work has been initiated by the board to meet these business objectives; the activities identified to date include:

  • Adopting the store computer systems (front and back office) used within Italy as standard for the group
  • Retaining the management systems already in place within the UK
  • Identifying the changes required to both sets of systems to implement a common solution
  • Reviewing the communication required to brief staff so that they support this initiative
  • Establishing a training programme to support the implementation of the common solution

Case study results chain diagram

Example Activity Report

Example Risks Report

Example Impact/Goals Report

Tags: #assurance #projectintelligence #pi #casestudy


First published 30/09/2009

In the most unlikely place, I recently came across a simple explanation of an idea which helps resolve a long-standing tension between the need for IT process maturity and the tendency of developers to work informally. Testers are often caught in the centre of this tension, particularly because of their need to write tests early with accurately predicted test results.

For some years, I’ve struggled to strike a balance between formalised development methods and the practical, but rather informal, adaptations almost always adopted by projects under pressure to deliver. I firmly believe in the value of "process" in software engineering; I don’t believe in the immature "back-of-fag-packet" practices prevalent in the early days of computing. But I have also noticed that a formalised process doesn’t guarantee a successful project, and absence of a method does not automatically doom a project to failure. Above all, I’ve mused long and hard over why, if a repeatable, mature process is such a good thing, don’t IS practitioners more readily accept methods, like SSADM, and quality standards which emphasise process improvement, like TickIT?

In my experience, formalised approaches are millstones to the average practitioner. Excessive paperwork, unnecessary overhead and cost are the usual cries of pain from the software developer. The quality manager, in turn, accuses the software developer of artisan tendencies and general intellectual disregard for the necessity of engineering discipline.

On balance, I tend to side with the developer: the formalities of quality management systems are more of a hindrance than a help. In fact, I believe their emphasis on formality causes a backlash. Practitioners are turned off, and miss the essential message that there are better ways to develop software than in an undisciplined free-for-all.

In the last five years, I’ve explored Rapid Application Development (RAD) as the middle ground. RAD is flexible and doesn’t rely on a conventional forest of paper deliverables. What is striking to me is that RAD embodies so many practices which successful project teams often seem to adopt naturally.

As good as I think it is, RAD raises serious questions. Structured development methods rely on verifying the outputs of a stage against a (notionally complete) definition prepared at the outset. By contrast, RAD dispenses with many intermediate products, focuses on validation of the end product, and generally conducts business a lot more interactively, in a climate which allows for a level of uncertainty.

Despite extensive definition of the products and process of a RAD development, proponents of conventional methods find it extremely difficult to imagine how it is possible to operate without the elements they believe are essential to good software engineering. These include complete, unambiguous statements of requirements, fully decomposed analyses and designs, and fully specified programs. These deliverables must be presented in strictly defined formats, subjected to formalised reviews and/or tests, and finally signed off. Moreover, it is not sufficient that all this happens; it must happen strictly according to pre-defined procedures.

When the full formality of a quality-managed development process is applied to the full range of software projects, it creates ludicrous anomalies. The worst such excess I encountered was that of a project leader spending a week preparing a project plan for a two-day project. The need for a lightweight, fleet-of-foot approach is self-evident to me, but how can you explain convincingly where the line between the essential and the superfluous should be drawn?

With all this as background, I happened to be reading a book on object-oriented modelling with the scintillating title, "UML Distilled", by Martin Fowler. The book is an abbreviated description of the Unified Modeling Language, being developed by three gurus of object-oriented methods: Booch, Jacobson and Rumbaugh. The second chapter is an overview of a development process. Mr Fowler has "changed my life" with the following paragraph:

"Projects vary in how much ceremony they have. High-ceremony projects have a lot of formal paper deliverables, formal meetings, formal sign-offs. Low-ceremony projects might have an inception phase that consists of an hour’s chat with the project’s sponsor and a plan that sits on a spreadsheet. Naturally, the bigger the project, the more ceremony you need. The fundamentals of the phases still occur, but in very different ways."

This is practically all that Fowler says on the subject. But it is enough. In one simple word, ceremony, he encapsulates a clear distinction between formality and the trappings of formality. To amplify a bit, imagine the difference between a registry office and a church wedding. Both achieve the same central aim, legally marrying the couple, both are formal, both have defined processes, but one has a lot more ceremony than the other. It is not formality which distinguishes them, but rather the level of ceremony.

The same applies to development processes whether RAD or other. All software development activity should be formal, but it is not necessary to burden all projects with high ceremony. Quality management systems, quality standards and development methods need to factor in the concept of an appropriate level of ceremony.

So then, how much ceremony is enough? Fowler relates high-ceremony to large projects, but I think there are other factors. The two which we see as key are:

  • the organisational distance between the producers and the acceptors of the software; the greater the distance, the more ceremony required.
  • the length of time between the inception and delivery of software product; the greater the time, the more ceremony needed.

RAD projects can be low-ceremony because, by design, they have a short organisational distance between software producers and acceptors – they are part of the same team – and the time from inception to delivery is always short. By the same token, it is easy to define projects which legitimately require high ceremony, and would not be suitable for RAD.

The concept of high and low ceremony is not only useful for putting quality management and formal processes in practical perspective, I believe it is equally powerful in attracting reluctant developers to better practice. It has got to help communicate why and how a well-defined process doesn’t have to equate to meaningless paperwork and all the other negative associations developers have about formalised processes.

I wholeheartedly recommend, "UML Distilled, Applying the Standard Object Modelling Language" by Martin Fowler with Kendall Scott, published by Addison Wesley Longman, 1997. If you find the idea of ceremony intriguing, you will find many other useful ideas there. Try, for example, Fowler’s description of "skeleton" which complements a low ceremony process.

© 1997 Paul Herzlich.

Tags: #ceremony #paulherzlich


First published 05/11/2009

This year, I was asked to present two talks on 'Past, Present and Future of Testing' at IBC Euroforum in Stockholm and 'Future of Testing' at SQC London. I thought it would be a good idea to write some notes on the 'predictions' as I'm joining some esteemed colleagues at a retreat this weekend, and we talk about futures much of the time. These notes are notes. This isn't a formal paper or article. Please regard them as PowerPoint speaker notes, not more. They don't read particularly well and don't tell a story, but they do summarise the ideas presented at the two talks.

Registered users can download the paper from the link below. If you aren't registered, you can register here.

Tags: #futures


First published 08/12/2009

The Project Euler site presents a collection of 'maths-related' problems to be solved by computer – more than 250 of them – and the site allows you to check your answers. You don't really need to be a mathematician for all of them, but you do need to be a good algorithm designer/programmer.

But it also reminded me of a recurring thought about something else. Could the problems be used as 'testing' problems too? The neat thing about some of them is that testing them isn't easy. Some problems have only one answer – they aren't very useful for testers, because there is only one test case (or you simply need to write/reuse a parallel program to act as an oracle). But others, like problem 22, provide input files to process (http://projecteuler.net/index.php?section=problems&id=22). The input file could be edited to generate variations – i.e. test cases that demonstrate the code works in general, not just in a specific situation. Because some problems must work for infinite cases, simple test techniques probably aren't enough (are they ever?)
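As a minimal sketch of what this looks like in practice, here is problem 22 in Python. The quoted, comma-separated file format matches the published problem, but the function names and the tiny hand-checkable inputs are my own:

    # Problem 22: sort the names, then sum (rank * alphabetical value).
    def total_name_score(names):
        """Sum over sorted names of (1-based rank) * (sum of letter values)."""
        return sum(
            rank * sum(ord(c) - ord("A") + 1 for c in name)
            for rank, name in enumerate(sorted(names), start=1)
        )

    def parse_names_file(text):
        """Split a quoted, comma-separated file into bare names."""
        return [name.strip('"') for name in text.split(",")]

    # Edited-input test cases: tiny files whose expected results can be
    # worked out by hand, so we show the code works in general, not just
    # for the one official input file.
    assert total_name_score(["COLIN"]) == 53              # worked example in the problem
    assert total_name_score(["B", "A"]) == 1 * 1 + 2 * 2  # sorting/ranking matters
    assert total_name_score(parse_names_file('"MARY","ANN"')) == 1 * 29 + 2 * 57

Each edited input is a new test case against the same requirement, and a property to probe at scale might be that shuffling the input file must never change the result.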

The Euler problem statements aren't designed for a specific technique – although they define requirements precisely, they are much closer to a real and challenging problem. The algorithms used to solve the problems are a mystery – and there may be many, many ways of solving the same problem. (Compare screens that simply update records in a database – pretty routine by comparison.) The implementation doesn't influence our tests – it's a true black-box problem.

Teaching testing “the certified way” starts from the wrong end. We teach technique X, give out a prepared requirement (that happens to fit technique X – sort of) and say, “prepare tests using technique X”. But real life and requirements (or lack of them) aren't like that. Requirements don't usually tell you which technique to use! The process of test model selection (and associated test design and coverage approaches) is rarely taught – even though this is perhaps the most critical aspect of testing.

All of which makes me think that maybe we could identify a set of problem statements (not necessarily 'mathematical') that don't just decompose to partitions and boundaries, states or decisions, and use these to teach and train. We teach students using small applications and ask them to practise their exploratory testing skills. Why don't we do the same with requirements?

Should training be driven by the need to solve problems rather than by trotting out memorised test design techniques? Why not create training exercises (and certification?) from written instructions, a specification, a PC and some software to test?

Wouldn't this be a better way to train people? To evaluate their ability as testers? This is old hat really – but still few people do it.

What stops us doing it? Is it because, really, we aren't as advanced as we think we are? Test techniques will never prove correctness (we know that well) – they are just heuristic, but perhaps more systematic, ways of selecting tests. Are the techniques really just clerical aids for bureaucratic processes rather than effective methods for defect detection and evidence collection? Where's the proof that says they are more effective? More effective than what?

Who is looking at how one selects a test model? Is it just gut feel, IQ, luck, whatever happens to be on my desk? Is there a method of model selection that could be defined and taught? Examined? Why don't we teach people to invent and choose test models? It seems to me that this needs much more attention than anyone gives it today.

What do you think?

Tags: #teaching #hands-ontraining #Euler #certification #model
