Paul Gerrard

My experiences, opinions in the Test Engineering business. I am republishing/rewriting old blogs from time to time.

First published 05/11/2010

I attended the Unicom “Next Generation Testing Conference” this week and there was lots of good discussion on the difference between Agile and Waterfall.

Afterwards (isn’t it always afterwards!) I thought of something I wished I’d raised at the time, so I thought I’d write a blog by way of follow up. Now I don’t do blogs, although I’ve been meaning to for ages. So this is a new, and hopefully pleasant, experience for me.

I’ve always been concerned about how to contract with a supplier in an agile way. Supplier management is a specific area of interest for me. I’ve listened to many presentations and case studies on this, but frankly haven’t been convinced yet. However, I’ve had one of those light bulb moments. We spend so much time talking about how to improve the delivery and predictability of systems and yet our industry has a bunch of suppliers whose business depends upon us not getting requirements right!

This isn’t their fault though as the purchasing process most companies go through encourages and rewards this behaviour. In a competitive bidding process where price is sensitive, all bidders are seeking to give a keen price for their interpretation of the requirements, knowing that requirements are typically inconsistent and complete. This allows them to bid low, maybe at cost or even lower, and then make their profit on the re-work. I’d always thought re-work was around 30% but Tom Gilb gave figures that show it can be much higher than that, so that’s a good profit incentive isn’t it.

So, here we all are, seeking to reduce spend putting pressure on daily rates, etc. We’re looking at the wrong dimension here! There is a much bigger prize!

Can we reduce the cost of purchasing, reduce the cost of re-work and meet our goals of predictable system delivery? Now (and some of you will be thinking, finally!), I'm convinced agile can help achieve this, but is any customer out there enlightened enough to realise it? Or is it more important to them to maintain the status quo and avoid change?

Tags: #agile #contracts #suppliers #test

Paul Gerrard Please connect and contact me using my linkedin profile. My Mastodon Account

First published 28/05/2007

I was in Nieuwegein, Holland last week giving my ERP Lessons Learned talk as part of the EuroSTAR – Testnet mini-event. Talking to people after the presentation, the conversation came around to test environments, and how many you need. One of the big issues in ERP implementations is the need for multiple, expensive test environments. Some projects have environments running into double figures (and I'm not talking about desktop environments for developers here). Well, my good friend said his project currently has 27 environments, and that still isn't enough for what they want to do. The 27 didn't include the test environments required by their interfacing systems. It's a massive project, needless to say, but TWENTY SEVEN? The mind boggles.

Is this a record? Can you beat that? I'd be delighted to hear from you if you can!

Tags: #Eurostar #TestNet


First published 28/04/2006

I've been asked to present the closing keynote at this year's Eurostar Conference in Manchester on December 7th. Here's the abstract:

When it comes to improving the capabilities of our testers, if you believe the training providers' brochures, you might think that a few days' training in a classroom is enough to give a tester all the skills required to succeed. But it is obvious that to achieve mastery, it can take months or years to acquire the full range of technical and inter-personal skills required. Based on my experience as a rowing coach, this keynote describes how an athletic training programme is run and compares that with the way most testers are developed. An athlete will have a different training regime for the different periods of the year, and coaching, mentoring, inspiring and testing are all key activities of the coach. Training consists of technical drills, strength, speed, endurance and team work. Of course a tester must spend most of their time actually doing their job, but there are many opportunities for evaluation and training even in a busy schedule. Developing tester capability requires a methodical, humane approach with realistic goals, focused training, regular evaluation, feedback and coaching, as well as on-the-job experience.

You can see the presentation here: multi-page HTML file | Powerpoint Slide Show

 

I originally created this presentation for the BCS SIGIST meeting on the Ides of March.



Tags: #developingtesters #athletes #Rowing #TesterDevelopment


First published 06/11/2009

Tools for Testing Web Based Systems

Selecting and Implementing a CAST Tool

Tools Selection and Implementation (PDF): What can test execution tools do for you? The main stages of tool selection and implementation, and a warning: success may not mean what you think!

Registered users can download the paper from the link below. If you aren't registered, you can register here.

Tags: #testautomation #cast


First published 29/01/2010

Last week I presented a talk called “Advancing Testing Using Axioms” at the First IIR Testing Forum in Helsinki, Finland.

Test Axioms have been formulated as a context-neutral set of rules for testing systems. Because they represent the critical thinking processes required to test any system, there are clear opportunities to advance the practice of testing using them.

The talk introduces “The First Equation of Testing” and discusses opportunities to use the Axioms to support test strategy development, test assessment and improvement and suggests that a tester skills framework could be an interesting by-product of the Axioms. Finally, “The Quantum Theory of Testing” is introduced.

Go to the web page for this talk.

Tags: #testaxioms #futures #advancingtesting


First published 25/02/2008

There's been a lively discussion on axioms of testing, and the subject of schools came up in that conversation. I'm not a member of any particular school and, if people like to be part of one, good for them. I think discussion of schools is a distraction and doesn't help the axioms debate at all. I do suggest that axioms are context- and school-independent, so with respect to schools of testing, I had better explain my position here.

My good friend Neil Thompson has eloquently illustrated an obvious problem of being a member of a school. The dialogue reminded me of a Monty Python sketch – I can't think which one, but "Dialectical Materialism" came into it somewhere. I think it was an argument between Marx, Engels and other philosophers.

Anyway, Neil's hilarious example is very well pitched. I'd like to set out my position with respect to schools in this post.

I'm a consultant. I don't do targeted marketing so I don't choose my clients, my clients choose me. In the last 18 months or so, I've had widely varying engagements including: a Test Assurance Manager role on a $200m SAP project with an oil company, consulted for a £1bn+ Government Infrastructure/safety-related project, a medium-sized financial services company whose business is supported by systems developed and supported by a consortium of small companies and a software house providing custom software solutions to banks.

Each organisation and project represents a different challenge for testing. There is a huge spread of organisational cultures, business and technical environments and, of course, scale. My projects are a tiny sample of the huge variation in contexts for which test approaches must be designed, but it's easy to say:

  • It is quite obvious that no single approach or pre-meditated combination of test methods, tools, processes, heuristics etc. can support all contexts.
  • Since all software projects are unique, the contexts are unique so off-the-shelf approaches must be customised or designed from scratch.
There are some fairly well-defined approaches to testing that have been promoted and used over the years, and one can identify some stereotypes in the way that Bret Pettichord has in his useful talk 'Schools of Software Testing'. It seems to me that Bret's characterisation of schools must be, at the same time, a characterisation of approaches. The ethos of a school defines the approach it promotes – and vice versa. It's not obvious whether schools predate their approaches or the approaches predate the schools. I'm not sure it matters – but it varies.

But I think the differences in approaches primarily reflect differences in emphasis. The Agile Manifesto articulated the values and preferences of that community very clearly, but there are a range of agile approaches and all have merit.

Which leaves us exactly where?

The so-called schools of testing appear to me to limit their members' thinking. For the members of a school, the ethos of the school represents a set of preferences for the types of projects and contexts they would choose to work in. Being a member of a school also says something about its members when they market their services or are invited to join a project: "I prefer (or possibly demand) to work in this way and am qualified to do so (by some means)".

In this respect, for individuals or organisations who align themselves with schools, the school ethos also represents a brand.

I am not a partner with any one test tool vendor because I value my independence. I do not limit myself to working only one way because my client projects don't come neatly packaged as one type or another. I have never known a project be adequately supported by one uncustomised, pre-packaged approach. Some people need to belong to a school or to be categorised or branded. I don't.

So, I'm not interested in testing schools, but I fully respect the wishes of people who want to be part of one.



Tags: #ALF


First published 06/11/2009

This paper, written by Paul Gerrard was presented to the EuroSTAR Conference in London, November 1995 and won the prize for 'Best Presentation'.

Client/Server (C/S) technology is being taken up at an incredible rate. Almost every development organisation has incorporated C/S as part of its IT strategy. It appears that C/S will be the dominant architecture taking IT into the next millennium. Although C/S technology is gaining acceptance rapidly and development organisations are getting better at building such systems, performance issues remain an outstanding risk even when a system meets its functional requirements.

This paper sets out the reasons why system performance is a risk to the success of C/S projects. A process is outlined which the authors have used to plan, prepare and execute automated performance tests. The principles involved in organising a performance test are set out, and an overview of the tools and techniques that can be used for testing two- and three-tier C/S systems is presented.

In planning, preparing and executing performance tests, there are several aspects of the task which can cause difficulties. The problems that are encountered most often relate to the stability of the software and the test environment. Unfortunately, testers are often required to work with a shared environment with software that is imperfect or unfinished. These issues are discussed and some practical guidelines are proposed.

Download the paper from here.



Tags: #performancetesting #client/server


First published 13/07/2011

It seems to me that, to date, perceptions of exploration in communities that don't practice it have always been that it is appropriate only for document- and planning-free contexts. It's not been helped by the emphasis that is placed on these contexts by the folk who practice and advocate exploration. Needless to say, the certification schemes have made the same assumption and promote the same misconception.

But I'm sure that things will change soon. Agile is mainstream and a generation of developers, product owners and testers who might have known no other approach are influencing at a more senior level. Story-based approaches to define requirements or to test existing requirements 'by example' and drive acceptance (as well as being a source of tests for developers) are gaining followers steadily. The whole notion of story telling/writing and exampling is to ask and answer 'what if' questions of requirements, of systems, of thinking.

There's always been an appetite for less test documentation but rarely the trust – at least in testers (and managers) brought up in process and standards-based environments. I think the structured story formats that are gaining popularity in Agile environments are attracting stakeholders, users, business analysts, developers – and testers. Stories are not new – users normally tell stories to communicate their initial views on requirements to business analysts. Business analysts have always known the value of stories in eliciting, confirming and challenging the thinking around requirements.

The 'just-enough' formality of business stories provides an ideal communication medium between users/business analysts and testers. Business analysts and users understand stories in business language. The structure of scenarios (given/when/then etc.) maps directly to the (pre-conditions/steps/post-conditions) view of a formal test case. But this format also provides a concise way of capturing exploratory tests.
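To make that mapping concrete, here is a minimal sketch in Python. The scenario, the Account class and the numbers are invented for illustration, not taken from any real project; the point is only that the given/when/then lines of a business story translate almost line-for-line into the pre-conditions, steps and post-conditions of a test:

```python
# Scenario (business language):
#   Given an account with a balance of 100
#   When the holder withdraws 30
#   Then the balance is 70

class Account:
    """Toy domain object standing in for the system under test."""

    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount


def test_withdrawal_reduces_balance():
    # Given (pre-conditions): establish the starting state
    account = Account(balance=100)

    # When (steps): perform the action under test
    account.withdraw(30)

    # Then (post-conditions): check the expected outcome
    assert account.balance == 70
```

The same three-part structure works equally well as a written record of an exploratory test: the given/when/then lines capture what was set up, what was done and what was observed, in just enough detail to repeat it.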

The conciseness and universality of stories might provide the 'just enough' documentation that allows folk who need documentation to explore with confidence.

I'll introduce some ideas for exploratory test capture in my next post.

Tags: #exploratorytesting #businessstories #BusinessStoryManager


First published 28/05/2007

As a matter of record, I wanted to post a note on my involvement with the testing certification scheme best known in the UK (and many other countries) as the ISEB Testing Certificate Scheme. I want to post some other messages commenting on the ISEB, ISTQB and perhaps other schemes too, so a bit of background might be useful.

In 1997, a small group of people in the UK started to discuss the possibility of establishing a testing certification scheme. At that time, Dorothy Graham and I were probably the most prominent. There was some interest in the US too, I recall, and I briefly set up a page on the Evolutif website promoting the idea of a joint European/US scheme, and asking for expressions of interest in starting a group to formulate a structure, a syllabus, an examination and so on. Not very much came of that, but Dot and I in particular, drafted an outline syllabus which was just a list of topics, about a page long.

The Europe/US collaboration didn't seem to be going anywhere, so we decided to start in the UK only to begin with. At the same time, we had been talking to people at ISEB who seemed interested in administering the certification scheme itself. At that time ISEB was a certifying organisation having charitable status, independent of the British Computer Society (BCS). That year, ISEB decided to merge into the BCS. ISEB still had its own identity and brand, but was a subsidiary of BCS from then on.

ISEB, having experience of running several schemes for several years (whereas we had no experience at all) suggested we form a certification 'board' with a chair, terms of reference and constitution. The first meeting of the new board took place on 14th January 1998. I became the first Chair of the board. I still have the Terms of Reference for the board, dated 17 May 1998. Here are the objectives of the scheme and the board extracted from that document:

Objectives of the Qualification

  • To gain recognition for testing as an essential and professional software engineering specialisation by industry.
  • Through the BCS Professional Development Scheme and the Industry Structure Model, to provide a standard framework for the development of testers' careers.
  • To enable professionally qualified testers to be recognised by employers, customers and peers, and to raise the profile of testers.
  • To promote consistent and good testing practice within all software engineering disciplines.
  • To identify testing topics that are relevant and of value to industry.
  • To enable software suppliers to hire certified testers and thereby gain commercial advantage over their competitors by advertising their tester recruitment policy.
  • To provide an opportunity for testers or those with an interest in testing to acquire an industry-recognised qualification in the subject.

Objectives of the Certification Board

The Certification Board aims to deliver a syllabus and administrative infrastructure for a qualification in software testing which is useful and commercially viable.

  • To be useful, it must be sufficiently relevant, practical, thorough and quality-oriented that it will be recognised by IT employers (whether in-house developers or commercial software suppliers) to differentiate amongst prospective and current staff; it will then be viewed as an essential qualification for those staff to attain.
  • To be commercially viable, it must be brought to the attention of all of its potential customers and must seem to them to represent good value for money at a price that meets ISEB's financial objectives.

The Syllabus evolved and was agreed by the summer. The first course and examination took place on 20-22 October 1998, and the successful candidates were formally awarded their certificates at the December 1998 SIGIST meeting in London. In the same month, I resigned as Chair but remained on the board. I subsequently submitted my own training materials for accreditation.

Since the scheme started, over 36,000 Foundation examinations have been taken with a pass rate of about 90%. Since 2002 more than 2,500 Practitioner exams have been taken, with a relatively modest pass rate of approximately 60%.

The International Software Testing Qualification Board (ISTQB) was established in 2002. This group aims to establish a truly international scheme and now has regional boards in 33 countries. ISEB have used the ISTQB Foundation syllabus since 2004, but continue to use their own Practitioner syllabus. ISTQB are developing a new Practitioner-level syllabus to be launched soon, but ISEB have already publicised their intention to launch their own Practitioner syllabus too. It's not clear yet what the current ISEB accredited training providers will do with TWO schemes. It isn't obvious what the market will think of two schemes either.

Interesting times lie ahead.

Tags: #istqb #iseb #TesterCertification


First published 29/06/2011

All testing is exploratory. There are quite a few definitions of Exploratory Testing, but the easiest to work with is the definition on Cem Kaner's site.

“Exploratory software testing is a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the value of her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project.”

The usual assumption is that this style of testing applies to software that exists and where the knowledge of software behaviour is primarily to be gathered from exploring the software itself.

But I'd like to posit that if one takes the view:

  • All the artefacts of a project are subjected to testing
  • Testers test systems, not just software in isolation
  • The learning process is a group activity that includes users, analysts, process and software designers, developers, implementers, operations staff, system administrators, testers, trainers, stakeholders and the senior user – most if not all of whom need to test, interpret and make decisions
  • All have their own objectives and learning challenges and use exploration to overcome them.
... then all of the activities from requirements elicitation onwards use testing and exploration.

Exploration wasn't invented by the Romans, but the word explorare is Latin. It's hard to see how the human race could populate the entire planet without doing a little exploration. The writings of Plato and Socrates are documented explorations of ideas.

Exploration is in many ways like play, but requires a perspicacious thought process. Testing is primarily driven by the tester's ability to create appropriate and useful test models. An individual may hold all the knowledge necessary to test in their heads whilst collecting, absorbing and interpreting information from a variety of sources including the system under test. Teams may operate in a similar way, but often need coordination and control and are accountable to stakeholders who need planning, script and/or test log documentation. Whatever. At some point before, during and/or after the 'test' they take guidance from their stakeholders and feedback the information they gather and adjust their activity where necessary.

Testing requires an interaction between the tester, their sources of knowledge and the object(s) under test. The source of knowledge may be people, documents or the system under test. The source of knowledge provides insights as to what to test and provides our oracle of expectations. The “exploration” is mostly of the source of knowledge. The execution of tests and consideration of outcomes confirms our beliefs – or not.

The real action takes place in the head of the tester. Consider the point where a tester reflects on what they have just learned. They may replay events in their mind and the “what just happened?” question triggers one of those aha! moments. Something isn't quite right. So they retrace their steps, reproduce the experience, look at variations in the path of thinking and posit challenges to what they test. They question and repeatedly ask, “what if?”

Of course, the scenario above could apply to testing some software, but it could just as easily apply to a review of requirements, a design or some documented test cases. The thought processes are the same. The actual outcome of a "what if?" question might be a test of some software. But it could also be a search in code for a variable, a step-through of a printed code listing, or a check of a decision table in a requirement or a state transition diagram. The outcome of the activity might be some knowledge, more questions to ask or some written or remembered tests that will be used to question some software sooner or later.

This is an exploratory post, by the way :O)



Tags: #exploratorytesting
