Paul Gerrard

My experiences in the Test Engineering business; opinions, definitions and occasional polemics. Many have been rewritten to restore their original content.

First published 08/01/2010

Sarang Kulkarni posted an interesting question on the LinkedIn “Senior Testing Professionals” discussion forum, essentially: how should the performance of testers be measured?

It's a question that has been posed endlessly by people funding testing, and every tester has worried about the answer. I can't tell you how many of the discussions I've been involved in have revolved around this question. It's a fair question and it's a hard one to answer. OK – why is it so hard to answer? Received wisdom has nothing to say except that quantity of testing is good (in some way) and that thoroughness (by some mysterious measure) is more likely to improve quality. Unfortunately, testers do not usually write or change software – only developers have an influence over quality. All in all, the quality of testing has only the most indirect relationship to quality. Measure performance? Forget it.

My response is based on a different view of what testing is for. Testing isn't about finding bugs so others can fix them. That's like saying literary criticism is about finding typos, or battlefield medicine is about finding bullet holes in people, or banking is about counting money. Not quite.

Testing exists to collect information about a system's behaviour (on the analyst's drawing board, as components, as a usable system or as an integrated whole), to calibrate that in some (usually subjective) way against someone else's expectations, and to communicate that to stakeholders. It's as simple and as complicated as that.

Simple, because testing exists to collect and communicate information for others to make a decision. More complicated, because virtually everything in software, systems, organisation and culture blocks this most basic objective. But hey, that's what makes a tester's life interesting.

If our role as testers is to collect and disseminate information for others to make decisions, then it must be those decision makers who judge the completeness and quality of our work – i.e. our performance. Who else can make that judgement – and judgement it must be, because there are no metrics that can reasonably be used to evaluate our performance.

The problem is, our 'performance' is influenced by the quality (good or bad) of the systems we test, the ease with which we can obtain behavioural information, the subjective view of the depth of the testing we do, the criticality of the systems we test, and the pressures on, the mentality of, and even the frame of mind of the people we test on behalf of.

What meaning could be assigned to any measure one cares to use? Silly.

Performance shamformance. What's the difference?

The best we can do is ask our stakeholders – the people we test on behalf of – what do they think we are doing and how well are we doing it? Subjective? Yes. Qualitative? Yes. Helpful? Yes. If...

The challenge to testers is to get stakeholders to articulate what exactly they want from us before we test, and then to give us their best assessment of how well we meet those objectives. Anything else is mere fluff.

Tags: #ALF


First published 08/12/2007

With most humble apologies to Rudyard Kipling...

If you can test when tests are always failing,
When failing tests are sometimes blamed on you,
If you can trace a path when dev teams doubt that,
Defend your test and reason with them too.

If you can test and not be tired by testing,
Be reviewed, and give reviewers what they seek,
Or, being challenged, take their comments kindly,
And don't resist, just make the changes that you need.

If you find bugs then make them not your master;
If you use tools – but not make tools your aim;
If you can meet with marketing and salesmen
And treat those two stakeholders just the same.

If you can give bad news with sincerity
And not be swayed by words that sound insane
Or watch the tests you gave your life to de-scoped,
And run all test scripts from step one again.

If you can analyse all the known defects
Tell it how it is, be fair and not be crass;
And fail, fail, fail, fail, fail repeatedly;
And never give up hope for that one pass.

If you explore and use imagination
To run those tests on features never done,
And keep going when little code is working
And try that last test boundary plus one.

If you can talk with managers and users,
And work with dev and analysts with ease,
If you can make them see your test objective,
Then you'll see their risks and priorities.

If you can get your automation working
With twelve hours manual testing done in one -
Your final test report will make some reading
And without doubt – a Tester you'll become!

By the way, you can read If... by Rudyard Kipling here: http://www.kipling.org.uk/poems_if.htm

Tags: #kipling #poetry #if...


First published 13/10/2010

It seems like Prezi is all the rage. As a frequent presenter, I thought I'd have a play, so I took some of the early text from the Tester's Pocketbook and created my first Prezi. Not half bad. I'm not sure it's a revolution, but sometimes anything is better than PowerPoint.

Tags: #testaxioms #Prezi


First published 29/01/2010

What are the emerging testing practices that have most promise? What have you tried that works for you? Importantly, what did not work?

Paul considers what “innovative” really means and looks at three emerging approaches: “virtualisation and the cloud”, “behaviour-driven development” and “crowdsourced testing”.

This session attempts to separate the hype from the reality and provide some pointers for the future.

This talk was presented at the Fourth Test Management Summit in London on 27 January 2010.

Registered users can download the presentation from the link below. If you aren't registered, you can register here.

Tags: #innovation


First published 09/12/2009

A post on the Software Testing Club, Is Testing Last in Line?, seems oh so familiar – complaints (if that is what they are) like this have been heard for as long as I've been in software (and I'm in my 29th year).

I think all of the responses to the blog are reasonable – but the underlying assumption in all (most) of them is that the tester is responsible for getting:

a) involved early
b) involved heavily

Now of course, there are arguments that we have all had drummed into us since the 1970s that can be used to support both of these aims. But they tend to exclude the viewpoint of the stakeholder. (To me a stakeholder is anyone who is interested in the outcome of testing).

Why were we not automatically involved earlier, if it is so self-evident we should be?

Was no one interested in what we could tell them (given access to whatever products were being produced) at the time? Do stakeholders think we produce so little of interest?

Why don't we get the time, budget, people and resources to test as much as we could?

The same challenges apply. And the conclusions are uncomfortable.

By and large our stakeholders are not stupid. If it is self-evident to us that we should be involved earlier, more and more often, why isn't it obvious to them? Howling at the moon won't help us.

Surely we need to engage with stakeholders:

What exactly do they want from us? When? In what format? How frequently? How flexibly? How thoroughly? and so on.

Testing is 'last in line' with good reason. We don't engage; we don't articulate what we do well enough; we provide data, not information; we provide it late (self-fulfilling prophecy time here); we focus on bugs rather than business goals; we write documents when we need to deliver intelligence; we find bugs when we need to provide evidence of success; we refuse to judge and give our stakeholders nothing when they need our support most; etc., etc.

Every ten years or so, I get depressed when the next “big thing” arrives and it appears that, well... it's the same old same old with a different label. New technologies offer new opportunities I guess and after reading a text book or two I get on board.

But for as long as I can remember, testers have been complaining that testing is 'last in line' and they BLAME OTHERS. Surely it's time to look at how WE behave as testers? Surely, we should look at what we are doing wrong rather than blame others?

Tags: #prioritisation #lastinline


First published 26/05/2011

Test Assurance is an evolving discipline that concerns senior testing professionals. Unfortunately, there isn't an industry-wide definition of it. Even if there were one, it would probably be far too vague.

This group aims to provide a focus for people who are active in the space to share knowledge, but also for senior folk who are looking for a potential career 'upgrade path'. By and large, test assurance pros are experts in testing, but sit above the fray. Their role is to assess, review, audit, understand and challenge testing, but not usually to conduct it. That is (as was written in one of my TA briefs)...

Test Assurance has no responsibility for delivery.

As an engagement, TA might be a full-time internal role on one project or a programme of projects, engaged from the beginning, with a scope of influence from requirements through to acceptance testing, whether performed internally or by suppliers.

A variation of this role would be to provide oversight of a project from an external point of view. In this case, Test Assurance might report to the chair of a programme management board – often a business leader.

But an alternative engagement might be as a testing trouble-shooter, where a (usually large) project has a 'testing problem'. A rapid review and report, with recommendations, presented at least at project board level, is the norm.

There are wide variations on these themes.

So my question in this discussion is – what is your experience/view of Test Assurance? Let's hear your comments – perhaps we can create a TA scope or terms of reference so we can define the group's focus.

Here is the link: http://www.linkedin.com/groups/Test-Assurance-3926259

Do join us.

Tags: #testassurance #linkedin


First published 25/02/2008

There's been a lively discussion on axioms of testing, and the subject of schools came up in that conversation. I'm not a member of any particular school, and if people like to be part of one – good for them. I think discussion of schools is a distraction and doesn't help the axioms debate at all. I do suggest that axioms are context- and school-independent – so, with respect to schools of testing, I had better explain my position here.

My good friend Neil Thompson has eloquently illustrated an obvious problem of being a member of a school. The dialogue reminded me of a Monty Python sketch – I can't think which one, but “Dialectical Materialism” came into it somewhere. I think it was an argument between Marx, Engels and other philosophers.

Anyway, Neil's hilarious example is very well pitched. I'd like to set out my position with respect to schools in this post.

I'm a consultant. I don't do targeted marketing, so I don't choose my clients; my clients choose me. In the last 18 months or so, I've had widely varying engagements, including: a Test Assurance Manager role on a $200m SAP project with an oil company; consultancy for a £1bn+ Government infrastructure/safety-related project; a medium-sized financial services company whose business is supported by systems developed and supported by a consortium of small companies; and a software house providing custom software solutions to banks.

Each organisation and project represents a different challenge for testing. There is a huge spread of organisational cultures, business and technical environments and, of course, scale. My projects are a tiny sample of the huge variation in contexts for which test approaches must be designed, but it's easy to say:

  • It is quite obvious that no single approach or pre-meditated combination of test methods, tools, processes, heuristics etc. can support all contexts.
  • Since all software projects are unique, the contexts are unique so off-the-shelf approaches must be customised or designed from scratch.

There are some fairly well-defined approaches to testing that have been promoted and used over the years, and one can identify some stereotypes in the way that Bret Pettichord has in his useful talk 'Schools of Software Testing'. It seems to me that Bret's characterisation of schools must be, at the same time, a characterisation of approaches. The ethos of a school defines the approach it promotes – and vice-versa. It's not obvious whether schools predate their approaches or the approaches predate the schools. I'm not sure it matters – but it varies.

But I think the differences in approaches reflect primarily differences in emphasis. The Agile Manifesto articulated the values and preferences of that community very clearly, but there are a range of agile approaches and all have merit.

Which leaves us exactly where?

The so-called schools of testing appear to me to limit their members' thinking. For the members of a school, the ethos of the school represents a set of preferences for the types of projects and contexts that they would choose to work in. Being a member of a school also says something about its members when they market their services or are invited to join a project: “I prefer (or possibly demand) to work in this way and am qualified to do so (by some means)”.

In this respect, for individuals or organisations who align themselves with schools, the school ethos also represents a brand.

I am not a partner with any one test tool vendor because I value my independence. I do not limit myself to working only one way because my client projects don't come neatly packaged as one type or another. I have never known a project be adequately supported by one uncustomised, pre-packaged approach. Some people need to belong to a school or to be categorised or branded. I don't.

So, I'm not interested in testing schools, but I fully respect the wishes of people who want to be part of one.

Tags: #ALF


First published 28/05/2007

I was in Nieuwegein, Holland last week giving my ERP Lessons Learned talk as part of the EuroSTAR – TestNet mini-event. Talking to people after the presentation, the conversation came around to test environments, and how many you need. One of the big issues in ERP implementations is the need for multiple, expensive test environments. Some projects have environments running into double figures (and I'm not talking about desktop environments for developers here). Well, my good friend said, his project currently has 27 environments, and that still isn't enough for what they want to do. 27 didn't include the test environments required for their interfacing systems to test. It's a massive project, needless to say, but TWENTY SEVEN? The mind boggles.

Is this a record? Can you beat that? I'd be delighted to hear from you if you can!

Tags: #Eurostar #TestNet


First published 10/12/2010

I am proud and honoured to have received the Eurostar European Testing Excellence award for 2010. I’m particularly grateful to Geoff Thompson who proposed me, Graham Thomas who encouraged Geoff to put the effort in and my business partner Susan Windsor for putting up with me. Of course, I would like to thank the friends, colleagues and customers who provided references for the submission. Needless to say, I also owe a huge debt to my wife Julia and family.

To be singled out for the award is very special but I want to emphasise that I am part of a large community of testers. It is an honour to be associated with such a group of people in the UK, Europe and worldwide who are so generous with their time to challenge and to share their knowledge. In this respect, Testers seem to me to be unique in the IT industry.

Thank you all once again.

Tags: #Eurostar #testingexcellenceaward #awards


First published 06/11/2009

Ten years ago, the Internet was a relatively small, closed network used by defence and academic organisations in the US. When Mosaic, a graphical Web browser, appeared in 1994 and became widely available, the explosive popularity of the Net began, and continues today. In August 1999 the number of people connected to the net was 195m, and this is expected to be 250m by the end of the millennium. In the UK, around 12m people, or 20% of the population of all ages, will have access when the new Millennium dawns. If you have a PC and modem, the cost of connection is the cost of a local telephone call.

Because the on-line market is world-wide, unrestricted, vast (and still growing), the emergence of electronic commerce as a new way of conducting business gathers momentum. E-commerce has been described as the last gold-rush of the millennium. Since the cost of entry into the e-commerce marketplace is so low, and the potential rewards so high, business-to-business and business-to-consumer vendors are scrambling to capture maximum market share.

Although the cost of entry into the market is low, the risk of failure in the marketplace is potentially very high. The web sites of traditional vendors with strong brand names have not automatically succeeded, and there have been some notable failures. Many of the largest e-commerce sites were completely unknown start-up companies three years ago. E-commerce systems have massive potential, but with new technology come new risks, and testing must change to meet the needs of the new paradigm. What are the risks of e-commerce systems?

The typical e-commerce system is a three-tiered client/server environment, database (often legacy system) servers working with application or business servers, fronted by web servers. Given this basic structure, many other special purpose servers may also be involved: firewalls, distributed object, transaction, authentication, credit card verification and payment servers are often part of the architecture. The Web is the vehicle by which the promise of client/server will finally be realised.

Many of the risks faced by e-commerce developers are the same as for client/server, but there are important differences. Firstly, the pace of development on the web is incredibly quick. 'Web-time' describes the hustle that is required to create and maintain momentum. Few systems are documented adequately. The time from a new idea to deployment onto the Web may only be a few weeks. Enhancements may be thought of in the morning and deployed in the afternoon. Some sites, for example a TV news channel site, must provide constantly changing, but up to date, content 24 hours a day, 365 days a year.

You have no control over the users who visit and use your site. Your users may access your site with any one of 35 different browsers or other web devices (will your site work with them all?). There is no limit to how many people can access your site at once (will your site crash under the load?). Users will not be trained, many may not speak your language, some may be disabled, some blind (will they find your site usable, fast, useful?). Some of your users will be crooks (can your site withstand a hacker's attack?). Some of your users may be under-age (are you selling alcohol to minors?). Whether you are in retail or not, one way of looking at the way people use your e-commerce site is to compare it with a traditional retail store.

Anyone can visit your store, but if your doors are shut (the site is down); if the queue to pay is too long; if the customer cannot pay the way they want to; if your price list is incomplete, out of date, or impossible to use, your customers will go elsewhere. E-commerce site designers must design to provide their users with the most relaxed, efficient and effective web-experience possible. E-commerce site testers must get inside the heads of users and create test scenarios that match reality.

What are the imperatives for e-commerce testers? To adopt a rapid-response attitude. To work closely with marketeers, designers, programmers and of course real users to understand both user needs and the technical risks to be addressed in testing. To have a flexible test process with perhaps 20 different test types that cover each of the most likely problems. To automate as much testing as possible.

Whether home-grown or proprietary, the essential tools are test data and transaction design; test execution using the programmer and user interfaces; and incident management and control to ensure the right problems get fixed in the right order. Additional tools to validate HTML and links, measure download time and generate loads are all necessary. To keep pace with development, wholly manual testing is no longer an option. The range of tools required is large, but most are now available.
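As a latter-day illustration of how modest such home-grown tooling can be, here is a minimal sketch of a link validator that also measures download times – two of the "additional tools" named above. It is written in modern Python (which, of course, post-dates this 1999 article) using only the standard library; the file name, check_page and the example URL are illustrative assumptions, not references to any tool mentioned in the article.

    # link_check.py -- an illustrative link validator and download timer.
    # A sketch only, assuming Python 3 and its standard library;
    # check_page and the example URL below are hypothetical.
    import time
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import Request, urlopen

    class LinkExtractor(HTMLParser):
        """Collect href targets from <a> tags."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def fetch(url, timeout=10):
        """Download url, returning (HTTP status, body, elapsed seconds)."""
        request = Request(url, headers={"User-Agent": "link-check-sketch"})
        start = time.monotonic()
        with urlopen(request, timeout=timeout) as response:
            body = response.read()
            return response.status, body, time.monotonic() - start

    def check_page(start_url):
        """Fetch a page, time the download, then validate every link on it."""
        status, body, elapsed = fetch(start_url)
        print(f"{start_url}: HTTP {status}, {len(body)} bytes in {elapsed:.2f}s")
        extractor = LinkExtractor()
        extractor.feed(body.decode("utf-8", errors="replace"))
        for link in extractor.links:
            target = urljoin(start_url, link)
            if not target.startswith(("http://", "https://")):
                continue  # skip mailto:, javascript:, and similar schemes
            try:
                status, _, elapsed = fetch(target)
                print(f"  OK   {target}: HTTP {status} in {elapsed:.2f}s")
            except Exception as exc:  # broken link, timeout, DNS failure...
                print(f"  FAIL {target}: {exc}")

    if __name__ == "__main__":
        check_page("https://example.com/")  # hypothetical starting page

A real harness would add concurrent requests for load generation, HTML validation and proper reporting, but even a sketch like this shows how little code the essentials require.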

Paul Gerrard, 12 September 1999.

Tags: #e-commerce #Risks
