Paul Gerrard

My experiences and opinions on the Test Engineering business. I am republishing/rewriting old blogs from time to time.

First published 04/11/2009

Hi,

With regard to the ATM accreditation – see attached. The cost of getting accredited in the UK is quite low – UKP 300, I believe. ISTQB UK will reuse the accreditation above.

Fran O'Hara is presenting the course this week. Next week I hope to get feedback from him and I'll update the materials to address the mandatory points in the review and add changes as suggested by Fran.

I've had no word from ISTQB on availability of sample papers as yet. I'll ask again.

I have taken the ATA exam and I thought that around one third of the questions were suspicious. That is, I thought the question had no correct answer, or the provided answers were ambiguous or wrong. Interestingly, there are no comments from the client on the exam, are there?

If their objective is to pass the exam only, then their objective is not the same as that of the ISTQB scheme. The training course has been reviewed against the ATA Syllabus, which explicitly states a set of learning objectives (in fact they are really training objectives, but that's another debate). The exam is currently a poor exam and does not examine the syllabus content well. It certainly is not focused on the same 'objectives' as the syllabus and training material. If the candidates joined the course thinking the only objective was to pass the exam, then they will not pay attention to the content that is the basis of the exam. I would argue that the best way to pass the exam is to attend to the syllabus. The ‘exam technique’ is very simple – and the same as for the Foundation exam. A shortage of sample questions should not impair their ability to pass the exam. The exam is based on the SYLLABUS. The course is based on the SYLLABUS.

Here are my comments on their points, interleaved below (originally in red).

  • The sessions were not oriented to pass the exam. They were general testing lessons… the main objective of the training should be to prepare the attendees for the examination. That is not the intention of the ISTQB scheme. If we offered a course that focused only on passing the exam we would certainly lose our accreditation. Agree that a sample paper is required (ISTQB to provide). It is extremely hard to prepare course material for the exam without having a sample paper. Although I have taken the exam (and found serious fault with it) I have not got a copy and was not allowed to give feedback. Most of the dedicated time in the training was not usable to pass the exam: the training was more oriented to test management than test analyst, which was the objective. I don’t know if this is true of the material, or of the way you presented it. Since the course is meant to be advanced and not basic, the material will be more focused on the tester making choices rather than doing basic exercises. The syllabus dedicates three whole days to test techniques – not management-specific material. For example: a lot of time was dedicated to risk management theory and practice, and the specific weight in the exam for that section was not so high. True. The section on risk-based testing is too long and needs cutting down.
  • More exercises needed: the training included some exercises but they were similar to the foundation level ones. The training provider must be responsible for finding and including advanced exercises. The exercises are similar to the Foundation course exercises because the Foundation syllabus is reused. The difficulty of the ATA exercises is slightly higher. However, because the exam presents multiple-choice answers, the best technique for obtaining the correct answer may not be how one tests. This is a failure of the exam, not the training material. (Until we get a sample paper, how can we give examples of exam questions?) Examples of exercises: for a specific situation, how many test conditions… using this test technique? Not sure I understand. Is the comment, “can we have exercises that ask how many conditions would be derived using a certain technique?” Easily done – just count the conditions in the answer. From our experience the exercises included in the exam were similar to the basic ones but more complex. Are they saying the ATA exam was like the Foundation exam – but more difficult? That is to be expected. Perhaps we provide some exercises from Foundation materials but make them a little more involved. There are a small number, but I agree we need to provide a lot more.
  • The training would include more reference to the foundation level. Er, not sure what this means. Could, or should? Are they asking for more content lifted from the Foundation scheme to be included in the course? In fact, much of the reusable material is already in the course (it’s much easier to reuse than to write new!). Not sure what they are asking here.
  • Sample exams needed. Agreed!
  • A lot of time dedicated in the sessions to theory that can just be self-studied by attendees, e.g. quality attributes. This is possible. Perhaps we could extract content from the syllabus and publish it as a pre-read for the course? There are some Q&As in the handouts already, but more could be added. However, a LOT of the syllabus could be treated this way.
  • More practical material needed for the following modules. Defect management: isn’t this covered in the Advanced Test Management syllabus? (They want LESS management, don’t they?) Reviews: in the training we covered theory (types, roles…) but not practical questions like the exam’s. We don’t know what the review questions in the exam look like. They are unlikely to be ‘practical’.

The general conclusion is that the training should be pass-exam oriented. See my comment above. If this is REALLY what they want, they do not need a training course. They should just memorise the syllabus, since that is what the exam is based on.

Some of the comments above, I think, are legitimate and we need to add/remove/change content in the course. Some of the ATM material could be reused as it is possibly more compact (risk, incidents, reviews). Yes, we need more sample questions – agreed! But I think some of the comments above betray a false objective. If we taught an exam-oriented course they would pass the exam but not learn much about testing. This is definitely NOT what the ISTQB scheme is about. However, people like Rex Black are cashing in on this. See here: https://store.rbcs-us.com/index.php?option=com_ixxocart&Itemid=6&p=product&id=16&parent=6

What will you suggest to the client re: getting their people through the exams? I hope some of the text above will help. If you do have specific points (other than the above) let me know. I will spend time in the next 2-3 weeks updating the materials.

Tags: #ALF


First published 29/01/2010

What are the emerging testing practices that have most promise? What have you tried and works for you? Importantly, what did not work?

Paul considers what “innovative” really means and looks at three emerging approaches: “virtualisation and the cloud”, “behaviour-driven development” and “crowdsourced testing”.

This session attempts to separate the hype from the reality and provide some pointers for the future.

This talk was presented at the Fourth Test Management Summit in London on 27 January 2010.

Registered users can download the presentation from the link below. If you aren't registered, you can register here.

Tags: #innovation


First published 06/11/2009

Low product quality may be associated with poor development activities, but most organisations identify lack of testing or low testing effectiveness as the culprit. The choice is clear: hire more testers, improve the way we do our testing or get someone else to do it.

Hiring more testers might increase the amount of testing, but unless the new testers are particularly capable, the productivity of test teams may be unchanged. On the same product, 'more testing' will only improve test effectiveness if a more systematic approach is adopted and techniques are used to reach the software nooks and crannies that other tests don't reach. Just like search teams, testers must 'spread out', but aimless wandering is not effective. Testers must be organised to avoid duplicating effort and leaving gaps.

If testing is currently chaotic, adding testers to a chaotic process may keep unemployed testers off the street, but it doesn't improve test effectiveness much. To make significant gains in effectiveness, both testing skills and infrastructure need to be enhanced. What about tools? Can't they be used to increase the testing? For a chaotic process, tools rarely add value (they often waste testers' time). All too often, tools are only used to run the tests that are easy to automate – the very tests that didn't find errors!

What about outsourcing? (Tester body-shopping should not, strictly, be regarded as outsourced testing – that is the 'more testers' route.) What is the outsourced testing service? The service definition should detail the responsibilities of the client as well as the outsourcer. Outsourcer responsibilities might include, for example: documentation of master test plans and test specifications, creation of test scripts and data, test execution, test management and test automation. Client responsibilities might include: direction on test scope, business risks, business processes, technical or application consultancy, assessment of incident severity, analysis of test results and sign-off.

If the client organisation has a poor track record of directing in-house test teams, however, the outsourcing arrangement is unlikely to benefit the client. Testing may not be faster, as testers may be held up through lack of direction from business or system experts. The testing may lack focus, as testers 'guess' where they should spend their time to best effect. Tests may not address the biggest business risks; cosmetic errors may be found at the expense of leaving serious errors undetected.

Simply giving up on testing and handing it over to a third party will cost more because you have a management overhead, as well as more expensive test teams. The quality of the testing is unlikely to improve – good testers are better at producing test plans and tests, but the content is based on the quality and amount of knowledge transferred from business and technical experts. If your experts are not used to being heavily involved in the test process, tests may be produced faster, but may still be of poor quality.

This does not mean that outsourced testers can never be as effective as internal resources. The point is that unless your organisation is used to using an internal testing service, it is unlikely to get the most out of an outsourced testing service. The inevitable conclusion is that most organisations should improve their test practices before outsourcing. But what are the improvements?

We'd like to introduce the good testing customer (GTC). It sounds like this means making the job of the outsourcer easier, but does it really mean... doing the job for them?

Describing a GTC is easy. They know what they want and can articulate their need to their supplier; they understand the customer/supplier relationship and how to manage it; they know how to discern a good supplier from a bad one. One could define a good customer of any service this way.

The GTC understands the role and importance of testing in the software development and maintenance process. They recognise the purpose and different emphases of development, system and acceptance testing. Their expectations of the test process are realistic and stable. The relationship between business, technical and schedule risks and testing is visible. When development slips, the testing budget is defended; the consequences of squeezing testing are acknowledged.

These are the main issues that need to be addressed by the client organisation if the benefits of good testing are to be realised. How does an organisation become a good testing customer? In the same way any organisation improves its practices – through management and practitioner commitment, clarity of purpose and a willingness to change the way things are done.

© 1998, Paul Gerrard

Tags: #outsourcing #improvement


First published 08/12/2007

With most humble apologies to Rudyard Kipling...

If you can test when tests are always failing,
When failing tests are sometimes blamed on you,
If you can trace a path when dev teams doubt that,
Defend your test and reason with them too.

If you can test and not be tired by testing,
Be reviewed, and give reviewers what they seek,
Or, being challenged, take their comments kindly,
And don't resist, just make the changes that you need.

If you find bugs then make them not your master;
If you use tools – but not make tools your aim;
If you can meet with marketing and salesmen
And treat those two stakeholders just the same.

If you can give bad news with sincerity
And not be swayed by words that sound insane
Or watch the tests you gave your life to de-scoped,
And run all test scripts from step one again.

If you can analyse all the known defects
Tell it how it is, be fair and not be crass;
And fail, fail, fail, fail, fail repeatedly;
And never give up hope for that one pass.

If you explore and use imagination
To run those tests on features never done,
And keep going when little code is working
And try that last test boundary plus one.

If you can talk with managers and users,
And work with dev and analysts with ease,
If you can make them see your test objective,
Then you'll see their risks and priorities.

If you can get your automation working
With twelve hours manual testing done in one –
Your final test report will make some reading
And without doubt – a Tester you'll become!

By the way, to read If... by Rudyard Kipling, you can see it here: http://www.kipling.org.uk/poems_if.htm

Tags: #kipling #poetry #if...


First published 05/11/2009

This year, I was asked to present two talks on 'Past, Present and Future of Testing' at IBC Euroforum in Stockholm and 'Future of Testing' at SQC London. I thought it would be a good idea to write some notes on the 'predictions' as I'm joining some esteemed colleagues at a retreat this weekend, and we talk about futures much of the time. These notes are notes. This isn't a formal paper or article. Please regard them as PowerPoint speaker notes, not more. They don't read particularly well and don't tell a story, but they do summarise the ideas presented at the two talks.

Registered users can download the paper from the link below. If you aren't registered, you can register here.

Tags: #futures


First published 06/11/2009

Presentation to SAST on Risk-Based Testing (PowerPoint PPT file) – This talk is an overview of Risk-Based Testing presented to the Swedish Association of Software Testing (SAST): Why do Risk-Based Testing?, Introduction to Risk, Risks and Test Objectives, Designing the Test Process, Project Intelligence, Test Strategy and Reporting.

Risk – The New Language of E-business Testing. This talk expands the theme of Risk-Based Testing introduced below. It focuses on e-business and presents more detail on risk-based test planning and reporting. It has been presented to the BCS SIGIST in London and was the opening keynote at EuroSTAR 2000.

Risk-Based Testing – longer introduction. This talk presents a summary of what risk-based testing is about. It introduces risk as the new language of testing and discusses the four big questions of testing: How much testing is enough? When should we stop testing? When is the product good enough? How good is our testing? Metrics (or at least counting bugs) don't give us the answers. The risk-based approach to testing can perhaps help us answer these questions, but it demands that we look at testing from a different point of view. A polemic.

Registered users can download the paper from the link below. If you aren't registered, you can register here.

Tags: #risk-basedtesting


First published 08/01/2010

Sarang Kulkarni posted an interesting question – essentially, how do you measure the performance of testers? – on the LinkedIn “Senior Testing Professionals” discussion forum.

It's a question that has been posed endlessly by people funding testing, and every tester has worried about the answer. I can't tell you how many of the discussions I've been involved in have revolved around this question. It's a fair question and it's a hard one to answer. OK – why is it so hard to answer? Received wisdom has nothing to say except that quantity of testing is good (in some way) and that thoroughness (by some mysterious measure) is more likely to improve quality. Unfortunately, testers do not usually write or change software – only developers have an influence over quality. All in all, the quality of testing has only the most indirect relationship to product quality. Measure performance? Forget it.

My response is based on a different view of what testing is for. Testing isn't about finding bugs so others can fix them. That's like saying literary criticism is about finding typos, or battlefield medicine is about finding bullet holes in people, or banking is about counting money. Not quite.

Testing exists to collect information about a system's behaviour (on the analyst's drawing board, as components, as a usable system or as an integrated whole), to calibrate that in some (usually subjective) way against someone else's expectations, and to communicate it to stakeholders. It's as simple and as complicated as that.

Simple, because testing exists to collect and communicate information for others to make a decision. More complicated, because virtually everything in software, systems, organisation and culture blocks this most basic objective. But hey, that's what makes a tester's life interesting.

If our role as testers is to collect and disseminate information for others to make decisions, then it must be those decision makers who judge the completeness and quality of our work – i.e. our performance. Who else can make that judgement – and judgement it must be, because there are no metrics that can reasonably be used to evaluate our performance.

The problem is, our 'performance' is influenced by the quality (good or bad) of the systems we test, the ease with which we can obtain behavioural information, the subjective view of the depth of the testing we do, the criticality of the systems we test, and the pressures on, mentality of, and even frame of mind of the people we test on behalf of.

What meaning could be assigned to any measure one cares to use? Silly.

Performance shamformance. What's the difference?

The best we can do is ask our stakeholders – the people we test on behalf of – what do they think we are doing, and how well are we doing it? Subjective, yes. Qualitative, yes. Helpful – yes. If...

The challenge to testers is to get stakeholders to articulate what exactly they want from us before we test and then to give us their best assessment of how we meet those objectives. Anything else is mere fluff.

Tags: #ALF


First published 03/12/2009

You may or may not find this response useful. :–)

“It depends”.

The “it depends” response is an old joke. I think I was advised by David Gelperin in the early 90s that if someone says “it depends” your response should be “ahh, you must be a consultant!”

But it does depend. It always has and it always will. The context-driven guys provide a little more information – “it depends on context”. But this doesn't answer the question, of course – we still get asked by people who really do need an answer, i.e. project managers who need to plan and to resource teams.

As an aside, there’s an interesting discussion of “stupid questions” here. This question isn't stupid, but the blog post is interesting.

In what follows – let me assume you’ve been asked the question by a project manager.

The 'best' dev/tester ratio is possibly the most context-specific question in testing. What are the influences on the answer?

  • What is the capability/competence of the developers and testers respectively and absolutely?
  • What do dev and test WANT to do versus what you (as a manager) want them to do?
  • To what degree are the testers involved in early testing (do they just system test, or are they involved from concept through to acceptance)?
  • What is the risk-profile of the project?
  • Do stakeholders care if the system works or not?
  • What is the scale of the development?
  • What is the ratio of new/custom code versus reused (and trusted) code/infrastructure?
  • How trustworthy is the to-be-reused code anyway?
  • How testable will the delivered system be?
  • Do resources come in whole numbers or in fractions?
  • And so on, and so on…
Even if you had the answers to these questions to six significant digits – you still aren’t much wiser because some other pieces of information are missing. These are possibly known to the project manager who is asking the question:
  • How much budget is available? (knowing this – he has an answer already)
  • Does the project manager trust your estimates and recommendations or does he want references to industry ‘standards’? i.e. he wants a crutch, not an answer.
  • Is the project manager competent and honest?
So we’re left with this awkward situation. Are you being asked the question to make the project manager feel better; to give him reassurance he has the right answer already? Does he know his budget is low and needs to articulate a case for justifying more? Does he think the budget is too high and wants a case for spending less?

Does he regard you as competent and trust what you say anyway? This final point could depend on his competence as much as yours! References to ‘higher authorities’ satisfy some people (if all they want is back-covering), but other folk want personal, direct, relevant experience and data.

I think a bit of von Neumann game theory may be required to analyse the situation!

Here’s a suggestion. Suppose the PM says he has 4 developers and needs to know how many testers are required. I’d suggest he has a choice:

  • 4 dev – 1 tester: the onus is on the devs to do good testing; the tester will advise, cherry-pick areas to test and focus on high-impact problems. The PM needs to micro-manage the devs, and the tester is a free agent.
  • 4 dev – 2 testers: testers partner with dev to ‘keep them honest’. Testers pair up to help with dev testing (whether TDD or not). Testers keep track of the coverage and focus on covering gaps and doing system-level testing. PM manages dev based on tester output.
  • 4 dev – 3 testers: testers accountable for testing. Testers shadow developers in all dev test activities. System testing is thorough. Testers set targets for achievement and provide evidence of it to PM. PM manages on the basis of test reports.
  • 4 dev – 4 testers: testers take ownership of all testing. But is this still Agile??? ;–)
Perhaps it’s worth asking the PM for dev and tester job specs and working out what proportion of their activities are actually dev and test. Don’t hire testers at all – just hire good developers (i.e. those who can test). If he has poor developers (who can’t/won’t test) then the ratio of testers goes up, because someone has to do their job for them.

Tags: #estimation #testerdeveloperratio


First published 01/05/2007

At last week's Test Management Forum, Susan Windsor introduced a lively session on estimation – from the top down. All good stuff. But during the discussion, I was reminded of a funny story (well I thought it was funny at the time).

Maybe twenty years ago (my memory isn’t as good as it used to be), I was working at a telecoms company as a development team leader. Around 7pm one evening, I was sat opposite my old friend Hugh. The office was quiet, we were the only people still there. He was tidying up some documentation, I was trying to get some stubborn bug fixed (I’m guessing here). Anyway. Along came the IT director. He was going home and he paused at our desks to say hello, how’s it going etc.

Hugh gave him a brief review of progress and said in closing, “we go live a week on Friday – two weeks early”. Our IT director was pleased but then highly perplexed. His response was, “this project is seriously ahead of schedule”. Off he went scratching his head. As the lift doors closed, Hugh and I burst out laughing. This situation had never arisen before. What a problem to dump on him! How would he deal with this challenge? What could he possibly tell the business? It could be the end of his career! Delivering early? Unheard of!

It’s a true story, honestly. But it also reminded me that, if estimation is an approximate process, our errors in estimation in the long run (over- or under-estimation, expressed as a percentage) should balance statistically around a mean value of zero, and that mean would represent the average actual time or cost it took for our projects to deliver.

Statistically, if we are dealing with projects that are delayed (or advanced!) by unpredictable, unplanned events, we should be overestimating as much as we underestimate, shouldn’t we? But clearly this isn’t the case. Overestimation, and delivering early, is a situation so rare it’s almost unheard of. Why is this? Here's a stab at a few reasons why we consistently 'underestimate'.

First (and possibly foremost), we don't underestimate at all. Our estimates are reasonably accurate, but consistently we get squeezed to fit pre-defined timescales or budgets. We ask for six people for eight weeks, but we get four people for four weeks. How does this happen? If we've been honest in our estimates, surely we should negotiate a scope reduction if our bid for resources or time is rejected? Whether we de-scope a selection of tests or not, when the time comes to deliver, our testing is unfinished. Of course, go-live is a bumpy period – production is where the remaining bugs are encountered and fixed in a desperate phase of recovery. To achieve a reasonable level of stability takes as long as we predicted. We just delivered too early.

Secondly, we are forced to estimate optimistically. Breakthroughs, which are few and far between, are assumed to be certainties. Of course, the last project, which was so troublesome, was an anomaly, and it will always be better next time. Of course, this is nonsense. One definition of madness is to expect a different outcome from the same situation and inputs.

Thirdly, our estimates are irrelevant. Unless the project can deliver within some mysterious predetermined time and cost constraints, it won't happen at all. Where the vested interests of individuals dominate, it could conceivably be better for a supplier to overcommit and live with a loss-making, troublesome post-go-live situation. In the same vein, the customer may actually decide to proceed with a no-hoper project because certain individuals' reputations, credibility and perhaps jobs depend on the go-live dates. Remarkable as it may seem, individuals within customer and supplier companies may actually collude to stage a doomed project that doesn't benefit the customer and loses the supplier money. Just call me cynical.

Assuming project teams aren't actually incompetent, it's reasonable to assume that project execution is never 'wrong' – execution just takes as long as it takes. There are only errors in estimation. Unfortunately, estimators are suppressed, overruled, pressured into aligning their activities with imposed budgets and timescales, and they appear to have been wrong.

Tags: #estimation


First published 03/05/2006

I coach rowing, so I'll use this as an analogy. Consider the crew of rowers in a racing eight. The coach’s intention is to get all eight athletes rowing in harmony, with the same movement with balance, poise and control. In theory, if everyone does the same thing, the boat will move smoothly, and everyone can apply the power of their legs, trunk and arms to moving the boat as quickly as possible (and win races). Of course, one could just show the crew a video of some Olympic champions and say, 'do what they do', 'exactly', 'now'. But how dumb is that? Each person is an individual, having different physical shape and size, physiology, ambition, personality, attitudes and skill levels. Each athlete has to be coached individually to bring them up to the 'gold standard'. But it's harder than that, too. It's not as if each athlete responds to the same coaching messages. The coach has to find the right message to get the right response from each individual. For example, to get rowers to protect their lower backs, they must 'sit up' in the boat. Some rowers respond to 'sit up' others to 'keep your head high', 'be arrogant' and so on. That's just the way it is with people.

In the same way, when we want people to adopt a new way of working – a new 'process', we have to recognise that to get the required level of process adherence and consistency, (i.e. changed behaviours) every individual faces a different set of challenges. For each individual, it's a personal challenge. To get each individual to overcome their innate resistance to change, improve their skill levels, adjust their attitudes, and overall, change their behaviour, we have to recognise that each individual needs individual coaching, encouragement and support.

Typical 'process' improvement attempts start with refined processes, some training, a bit of practice, a pilot, then a roll-out. But where is the personal support in all this? To ask a group of individuals to adopt a new process (any process) by showing them the process and saying 'do it' is like asking a village football team to 'play like Brazil'.

Tags: #Rowing #TesterDevelopment #TestProcessImprovement
