Is anyone researching HOW to choose a test model?
First published 08/12/2009
The Project Euler site presents a collection of 'maths-related' problems to be solved by computer – more than 250 of them – and the site lets you check your answers. You don't really need to be a mathematician for all of them, but you do need to be a good algorithm designer and programmer.
But it also reminded me of a recurring thought about something else: could the problems be used as 'testing' problems too? The neat thing about some of them is that testing them isn't easy. Some problems have only one answer – they aren't very useful for testers, because there is only one test case (or you simply write/reuse a parallel program to act as an oracle). But others, like problem 22 for example, provide input files to process: http://projecteuler.net/index.php?section=problems&id=22. The input file could be edited to generate variations – i.e. test cases that demonstrate the code works in general, not just in one specific situation (a sketch of this idea follows below). Because some problems must work for infinite cases, simple test techniques probably aren't enough (are they ever?)
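Here is a minimal sketch of that idea in Python (the post doesn't prescribe a language), assuming problem 22's 'names scores' task: read a file of quoted, comma-separated names, sort them, and sum each name's alphabetical value multiplied by its position. Edited variations of the input file then become extra test cases, small enough to check by hand.

```python
# A sketch, assuming problem 22 ("Names scores"): read a file of quoted,
# comma-separated names, sort them alphabetically, then sum each name's
# alphabetical value multiplied by its 1-based position in the sorted list.
def names_score(text: str) -> int:
    names = sorted(name.strip('"') for name in text.split(","))
    return sum(pos * sum(ord(ch) - ord("A") + 1 for ch in name)
               for pos, name in enumerate(names, start=1))

# Edited variations of the input act as extra test cases: tiny files whose
# answers can be worked out by hand, plus a check that re-ordering the
# file must not change the result (the sort should make order irrelevant).
assert names_score('"A"') == 1                        # A = 1, at position 1
assert names_score('"AB","AA"') == (1 * 2) + (2 * 3)  # AA then AB: 2 + 6
assert names_score('"MARY","LINDA"') == names_score('"LINDA","MARY"')
```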
The Euler problem statements aren't designed for a specific technique – although they define requirements precisely, they are much closer to real, challenging problems. The algorithms used to solve them are a mystery – and there may be many, many ways of solving the same problem (cf. screens that simply update records in a database – pretty routine by comparison). The implementation doesn't influence our tests – it's a true black box problem, as the sketch below illustrates.
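As a hedged illustration, take Project Euler problem 1 (the sum of all multiples of 3 or 5 below 1000): two quite different implementations can be checked by exactly the same black-box tests, because the tests depend only on the problem statement, not on how it was solved.

```python
# Two ways of solving problem 1 (sum of multiples of 3 or 5 below n):
# a brute-force loop and a closed-form arithmetic-series calculation.
def sum_multiples_brute(n: int) -> int:
    return sum(i for i in range(n) if i % 3 == 0 or i % 5 == 0)

def sum_multiples_formula(n: int) -> int:
    def below(k: int) -> int:          # sum of multiples of k below n
        m = (n - 1) // k
        return k * m * (m + 1) // 2
    return below(3) + below(5) - below(15)   # inclusion-exclusion

# The same black-box tests apply to both, whatever the implementation:
for solve in (sum_multiples_brute, sum_multiples_formula):
    assert solve(10) == 23             # 3 + 5 + 6 + 9, checkable by hand
    assert solve(1000) == 233168       # the published problem 1 answer
```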
Teaching testing “the certified way” starts from the wrong end. We teach technique X, give out a prepared requirement (that happens to fit technique X – sort of) and say, “prepare tests using technique X”. But real life and requirements (or lack of them) aren't like that. Requirements don't usually tell you which technique to use! The process of test model selection (and associated test design and coverage approaches) is rarely taught – even though this is perhaps the most critical aspect of testing.
All of which makes me think that maybe we could identify a set of problem statements (not necessarily 'mathematical') that don't just decompose into partitions and boundaries, states or decisions, and use these to teach and train. We already teach students using small applications and ask them to practice their exploratory testing skills. Why don't we do the same with requirements?
Should training be driven by the need to solve problems rather than by the rote recital of memorised test design techniques? Why not create training exercises (and certification?) from written instructions, a specification, a PC and some software to test?
Wouldn't this be a better way to train people? To evaluate their ability as testers? This is old hat really – but still few people do it.
What stops us doing it? Is it because, really, we aren't as advanced as we think we are? Test techniques will never prove correctness (we know that well) – they are just heuristics, perhaps more systematic ways of selecting tests, but heuristics nonetheless. Are the techniques really just clerical aids for bureaucratic processes rather than effective methods for defect detection and evidence collection? Where's the proof that says they are more effective? More effective than what?
Who is looking at how one selects a test model? Is it just gut feel, IQ, luck, or whatever happens to be on my desk? Is there a method of model selection that could be defined and taught? Examined? Why don't we teach people to invent and choose test models? It seems to me that this needs much more attention than anyone gives it today.
What do you think?
Tags: #teaching #hands-on-training #Euler #certification #model
Paul Gerrard