How can testing be monitored? How can we measure the performance of testing?

First published 08/01/2010

Sarang Kulkarni posted an interesting question on the LinkedIn “Senior Testing Professionals” discussion forum.

It's a question that has been posed endlessly by people funding testing, and every tester has worried about the answer. I can't tell you how many discussions I've been involved in have revolved around this question. It's a fair question and it's a hard one to answer. OK – why is it so hard to answer? Received wisdom has nothing to say except that quantity of testing is good (in some way) and that thoroughness (by some mysterious measure) is more likely to improve quality. Unfortunately, testers do not usually write or change software – only developers have an influence over quality. All in all, the quality of testing has the most indirect relationship to quality. Measure performance? Forget it.

My response is based on a different view of what testing is for. Testing isn't about finding bugs so others can fix them. That's like saying literary criticism is about finding typos, or battlefield medicine is about finding bulletholes in people, or banking is about counting money. Not quite.

Testing exists to collect information about a system's behaviour (on the analyst's drawing board, as components, as a usable system or as an integrated whole), calibrate that in some (usually subjective) way against someone else's expectations, and communicate it to stakeholders. It's as simple and as complicated as that.

Simple because testing exists to collect and communicate information for others to make a decision. More complicated because virtually everything in software, systems, organisation and culture blocks this most basic objective. But hey, that's what makes a tester's life interesting.

If our role as testers is to collect and disseminate information for others to make decisions, then it must be those decision makers who judge the completeness and quality of our work – i.e. our performance. Who else can make that judgement – and judgement it must be, because there are no metrics that can reasonably be used to evaluate our performance.

The problem is, our 'performance' is influenced by the quality (good or bad) of the systems we test, the ease with which we can obtain behavioural information, the subjective view of the depth of the testing we do, the criticality of the systems we test, and the pressures on, mentality of, and even frame of mind of the people we test on behalf of.

What meaning could be assigned to any measure one cares to use? Silly.

Performance, shmerformance. What's the difference?

The best we can do is ask our stakeholders – the people we test on behalf of – what do they think we are doing and how well are we doing it? Subjective? Yes. Qualitative? Yes. Helpful? Yes. If...

The challenge for testers is to get stakeholders to articulate exactly what they want from us before we test, and then to give us their best assessment of how well we meet those objectives. Anything else is mere fluff.

Tags: #ALF

Paul Gerrard