Paul Gerrard

My experiences in the Test Engineering business; opinions, definitions and occasional polemics. Many have been rewritten to restore their original content.

First published 04/11/2009

Hi,

With regard to the ATM accreditation – see attached. The cost of getting accredited in the UK is quite low – UKP 300, I believe. ISTQB UK will reuse the accreditation above.

Fran O'Hara is presenting the course this week. Next week I hope to get feedback from him and I'll update the materials to address the mandatory points in the review and add changes as suggested by Fran.

I've had no word from ISTQB on availability of sample papers as yet. I'll ask again.

I have taken the ATA exam and I thought that around one third of the questions were suspicious. That is, I thought the question did not have an answer, or the provided answers were ambiguous or wrong. Interestingly, there are no comments from the client on the exam, are there?

If their objective is to pass the exam only, then their objective is not the same as the ISTQB scheme. The training course has been reviewed against the ATA Syllabus which explicitly states a set of learning objectives (in fact they are really training objectives, but that's another debate). The exam is currently a poor exam and does not examine the syllabus content well. It certainly is not focused on the same 'objectives' as the syllabus and training material. If the candidates joined the course thinking the only objective was to pass the exam, then they will not pay attention to the content that is the basis of the exam. I would argue that the best way to pass the exam is to attend to the syllabus. The ‘exam technique’ is very simple – and the same as the Foundation exam. A shortage of test questions should not impair their ability to pass the exam. The exam is based on the SYLLABUS. The course is based on the SYLLABUS.

Here are my comments on their points – in RED.

  • The sessions were not oriented to pass the exam. They were general testing lessons… the main objective of the training should be to prepare the assistants for the examination. That is not the intention of the ISTQB scheme. If we offered a course that focused only on passing the exam we would certainly lose our accreditation. Agree that a sample paper is required (ISTQB to provide). It is extremely hard to prepare course material for the exam without having a sample paper. Although I have taken the exam (and found serious fault with it) I have not got a copy and was not allowed to give feedback. Most of the dedicated time in the training was not usable to pass the exam: the training was more oriented to test management than test analyst, which was the objective. I don't know if this is true of the material, or of the way you presented it. Since the course is meant to be advanced and not basic, the material is more focused on the tester making choices rather than doing basic exercises. The syllabus dedicates three whole days to test techniques – not management-specific material. For example: a lot of time dedicated to risk management theory and practice and the specific weight in the exam for that section was not so high. True. The section on risk-based testing is too long and needs cutting down.
  • More exercises needed: the training included some exercises but they were similar to the foundation level ones. The training provider must be responsible for finding and including advanced exercises. The exercises are similar to the Foundation course exercises because the Foundation syllabus is reused. The difficulty of the ATA exercises is slightly higher. However, because the exam presents multiple-choice answers, the best technique for obtaining the correct answer may not be how one tests. This is a failure of the exam, not the training material. (Until we get a sample paper, how can we give examples of exam questions?) Examples of exercises: for a specific situation, how many test conditions are derived using this test technique? Not sure I understand. Is the comment, "can we have exercises that ask, how many conditions would be derived using a certain technique?" Easily done – just count the conditions in the answer. From our experience the exercises included in the exam were similar to the basic ones but more complex. Are they saying the ATA exam was like the Foundation exam – but more difficult? That is to be expected. Perhaps we provide some exercises from Foundation materials but make them a little more involved. There are a small number, but I agree we need to provide a lot more.
  • The training would include more reference to the foundation level. Er, not sure what this means. Could or should? Are they asking for more content lifted from the Foundation scheme to be included in the course? In fact, much of the reusable material is already in the course (it's much easier to reuse than to write new!). Not sure what they are asking here.
  • Sample exams needed Agreed!
  • A lot of time dedicated in the sessions to theory that can be just self-studied by assistants: i.e. quality attributes. This is possible. Perhaps we could extract content from the syllabus and publish it as a pre-read for the course? There are some Q&A in the handouts already, but more could be added. However, a LOT of the syllabus could be treated this way.
  • More practical needed for the following modules: Defect management – isn't this covered in the Advanced Test Management syllabus? (They want LESS management, don't they?) Reviews – in the training we covered theory (types, roles…) but not practical questions like the exam's. We don't know what the review questions in the exam look like. They are unlikely to be 'practical'.

The general conclusion is that the training should be pass exam oriented. See my comment above. If this is REALLY what they want – they do not need a training course. They should just memorise the syllabus, since that is what the exam is based on. Some of the comments above, I think are legitimate and we need to add/remove/change content in the course. Some of the ATM material could be reused as it is possibly more compact. (Risk, incidents, reviews). Yes we need more sample questions – agreed! But I think some of the comments above betray a false objective. If we taught an exam-oriented course they would pass the exam but not learn much about testing. This is definitely NOT what the ISTQB scheme is about. However, people like Rex Black are cashing in on this. See here: https://store.rbcs-us.com/index.php?option=com_ixxocart&Itemid=6&p=product&id=16&parent=6 What will you suggest to the client re: getting their people through the exams? I hope some of the text above will help. If you do have specific points (other than above) let me know. I will spend time in the next 2-3 weeks updating the materials.

Tags: #ALF

Paul Gerrard My linkedin profile is here My Mastodon Account

First published 28/04/2006

I've been asked to present the closing keynote at this year's Eurostar Conference in Manchester on December 7th. Here's the abstract:

When it comes to improving the capabilities of our testers, if you believe the training providers' brochures, you might think that a few days' training in a classroom is enough to give a tester all the skills required to succeed. But it is obvious that to achieve mastery it can take months or years to acquire the full range of technical and inter-personal skills required. Based on my experience as a rowing coach, this keynote describes how an athletic training programme is run and compares that with the way most testers are developed. An athlete will have a different training regime for different periods of the year, and coaching, mentoring, inspiring and testing are all key activities of the coach. Training consists of technical drills, strength, speed, endurance and team work. Of course a tester must spend most of their time actually doing their job, but there are many opportunities for evaluation and training to occur even in a busy schedule. Developing tester capability requires a methodical, humane approach with realistic goals, focused training, regular evaluation, feedback and coaching as well as on-the-job experience.

You can see the presentation here: multi-page HTML file | Powerpoint Slide Show

 

I originally created this presentation for the BCS SIGIST meeting on the Ides of March.



Tags: #developingtesters #athletes #Rowing #TesterDevelopment


First published 29/01/2010

Last week I presented a talk called “Advancing Testing Using Axioms” at the First IIR Testing Forum in Helsinki, Finland.

Test Axioms have been formulated as a context-neutral set of rules for testing systems. Because they represent the critical thinking processes required to test any system, there are clear opportunities to advance the practice of testing using them.

The talk introduces “The First Equation of Testing” and discusses opportunities to use the Axioms to support test strategy development, test assessment and improvement and suggests that a tester skills framework could be an interesting by-product of the Axioms. Finally, “The Quantum Theory of Testing” is introduced.

Go to the web page for this talk.

Tags: #testaxioms #futures #advancingtesting


First published 06/11/2009

Tools for Testing Web Based Systems

Selecting and Implementing a CAST Tool

Tools Selection and Implementation (PDF) What can test execution tools do for you? The main stages of tool selection and implementation, and a warning: Success may not mean what you think!

Registered users can download the paper from the link below. If you aren't registered, you can register here.

Tags: #testautomation #cast


First published 05/10/2010

The first four articles in this series have set out the main approaches to combating regression in changing software systems. From a business and technical viewpoint, we have considered both pre-change regression prevention (impact analysis) and post-change regression detection (regression testing). In this final article of the series, we’ll consider three emerging approaches that promise to reduce the regression threat and present some considerations of an effective anti-regression strategy with a recap of the main messages of the article series.

Three Approaches: Test, Behaviour and Acceptance Test-Driven Development

There is an increasing amount of discussion on development approaches based on the test-driven model. Ten years or so ago, before lightweight (later named Agile) approaches became widely publicized, test-driven development (TDD) was rare. Some TDD happened, but mostly in high-integrity environments where component development and testing were driven by the need to meet formal functional and structural test coverage targets.

Over the course of the last ten years however, the notion of developers creating automated tests typically based on stories and discussions with on-site customers is becoming more common. The leaders in the Agile community are tending to preach behaviour- (BDD) and even acceptance test-driven development (ATDD) to improve and make accessible the test assets in Agile projects. They are also an attempt to move the Agile emphasis from coding to delivery of stakeholder value.

The advocates of these approaches (see for example testdriven.net, gojko.net, behaviour-driven.org, ATDD in Practice) would say that the approaches are different and of course, in some respects they are. But from the point of view of our discussion of anti-regression approaches, the relevance is this:

  1. Regression testing performed by developers is probably the most efficient way to demonstrate functional equivalence of software (given the limited scope of unit testing).
  2. The test-driven paradigm ensures that regression test assets are acquired and maintained in synchrony with the code – so are accurate and constantly reusable.
  3. The existence of a set of trusted regression tests means that the programmer is protected (to some degree) from introducing regressions when they change code (to enhance, fix bugs in or refactor code).
  4. Programmers, once they commit to the test-first approach, tend to find their design and coding activities more predictable and less stressful.
These approaches obviously increase the effort at the front-end and many programmers are not adopting (and may never adopt) them. However, the trend toward test-first does seem to be gaining momentum.
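The protection described in point 3 can be sketched with a minimal Python unittest example. The discount function and its rules are invented for illustration; the point is that tests written first double as a regression pack when the code is later refactored:

```python
import unittest

# A trivial unit under test: a discount rule written to make the tests
# below pass (test-first). The rules are hypothetical.
def discount(order_total):
    """Return the discount rate for a given order value."""
    if order_total >= 100:
        return 0.10
    if order_total >= 50:
        return 0.05
    return 0.0

class DiscountRegressionTests(unittest.TestCase):
    # Written before the code existed; they now form the regression pack.
    def test_no_discount_below_threshold(self):
        self.assertEqual(discount(49.99), 0.0)

    def test_boundary_gives_five_percent(self):
        self.assertEqual(discount(50), 0.05)

    def test_large_order_gives_ten_percent(self):
        self.assertEqual(discount(100), 0.10)

if __name__ == "__main__":
    unittest.main()
```

Any later change to `discount` – an enhancement, a bug fix or a refactoring – that alters existing behaviour trips one of these tests immediately, which is the "functional equivalence" point made above.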

A natural extension of test-first in Agile (and potentially more structured) environments is the notion of live specifications. In this approach, the automated tests become the independent and executable definition of the behaviour of the system. The tests define the behaviour of a system by example, and can be considered to be executable specifications (of a sort). Of course, examples alone cannot define the behaviour of systems completely and some level of logical specification will always be required. However, the live-specification approach holds great promise, particularly as a way of reducing regressions.

The ideal seems to be that where a change is required by users, the live specification is changed, new tests added and existing tests changed or retired as required. The software changes are made in parallel. The new and changed tests are run to demonstrate the changes work as required, and the existing (unchanged) tests are, by definition, the regression test pack. The format, content and structure of such live-specifications are evolving and a small number of organisations claim some successes. It will be interesting to see examples of the approach in action.

Unified Requirements and Systems Testing

The test-first approaches discussed above are gaining popularity in Agile environments. But what can be done in structured, waterfall, larger developments?

Some years ago (in my first Eurostar paper in 1993), I proposed a 'Unified Approach to System Functional Testing'. In that paper, I suggested that a tabular notation for capturing examples or test cases could be used to create crude prototypes, review checklists and structured walkthroughs of requirements. These 'behaviours', as I called them, could be used to test requirements documents, but also reused as the basis of both system and acceptance testing later on. Other interests took priority and I didn't take this proposal much further until recently.

Several developments in the industry make me believe that a practical implementation of this unified approach is now possible and attractive to practitioners. See for example the model-based papers here: www.geocities.com/modelbasedtesting/online_papers.htm or the tool described here: teststories.info. To date, these approaches have focused on high formality and embedded/industrial applications.

Our approach involves the following activities:

  1. Requirements are tabulated to allow cross-referencing.
  2. Requirements are analysed, and stories, comprising feature descriptions and a covering set of scenarios and examples (acceptance criteria), are created.
  3. The scenarios are mapped to paths through the business process and a data dictionary; paper and automated prototypes can be generated from the scenarios.
  4. Using scenario walkthroughs, the requirements are evaluated, and omissions and ambiguities identified and fixed.
  5. The process paths, scenarios and examples may be incorporated into software development contracts, if required.
  6. The process paths, scenarios and examples are re-used as the basis of the acceptance test, which is conducted in the familiar way.
Essentially, the requirements are ‘exampled’, with features identified and a set of acceptance criteria defined for each – in a structured language. It is the structure of the scenarios that allows tabular definitions of tests for use in manual procedures as well as skeletal automated tests to be generated automatically. There are several benefits deriving from this approach, but the two that concern us here are:
  • The definition of tests and the ability to generate automated scripts occurs before code is written which means that the test-first approach is viable for all projects, not just Agile.
  • The database of requirements, processes, process paths, features, examples and data dictionary are cross-referenced. The database can be used to support more detailed business-oriented impact analysis.
The first benefit has been discussed in the previous section. The second has great potential.
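To make both benefits concrete, here is a hypothetical sketch in Python: scenarios are captured as rows cross-referencing requirements and features, skeletal test procedures are generated from them, and the same table supports a simple impact query. The requirement IDs, feature names and wording are all invented for illustration:

```python
# Each scenario row cross-references a requirement, a feature and the
# example data (acceptance criteria) - a hypothetical structure.
scenarios = [
    {"req": "R-012", "feature": "Open account",
     "given": "no existing account for the customer",
     "when": "a valid application is submitted",
     "then": "an account is created with zero balance"},
    {"req": "R-013", "feature": "Open account",
     "given": "an account already exists for the customer",
     "when": "a duplicate application is submitted",
     "then": "the application is rejected"},
]

def generate_skeletal_test(s):
    """Render one scenario as a skeletal manual/automated test procedure."""
    return (f"Test for {s['req']} ({s['feature']}):\n"
            f"  Pre-condition: {s['given']}\n"
            f"  Step:          {s['when']}\n"
            f"  Expected:      {s['then']}")

def impacted_scenarios(req_id):
    """Crude impact analysis: which scenarios reference this requirement?"""
    return [s for s in scenarios if s["req"] == req_id]

for s in scenarios:
    print(generate_skeletal_test(s))
```

The same cross-referenced table answers both questions above: "generate me the tests for this feature" and "which scenarios are affected if requirement R-012 changes?"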

The business knowledge captured in the process will allow some very interesting what-if questions to be asked and answered. If a business process is to change, the system features, requirements, scenarios and tests affected can be traced. If a system feature is to be changed, the scenarios, tests, requirements and business processes affected can be traced. This knowledge should provide, at least at a high level, a better understanding of the impact of change. Further, it promotes the notion of live specifications and Trusted Requirements.

There is a real possibility that the (typically) huge investment in requirements capture will not be wasted and the requirements may be accurately maintained in parallel with a covering set of scenarios. Further, the business knowledge captured in the requirements and the database can be retained for the lifetime of the system in question.

Improving Software Analysis Tools

The key barrier to performing better technical impact analyses is the lack (and expense) of appropriate tools to provide a range of source code analyses. Tools that provide visualisations of the architecture, relationships between components and hierarchical views of these relationships are emerging. Some obvious challenges make life somewhat difficult though:
  1. Tools are usually language dependent so mixed environments are troublesome.
  2. The source code for third-party components used in your system may not be available.
  3. Visualisation software is available, but for real-size systems, the graphical models can become huge and unworkable.
These tools are obviously focused at architects, designers and developers and are naturally technical in nature.

An example of how tools are evolving in this area is Structure101g (headwaysoftware.com). This tool can perform detailed structural analyses of several languages ("java, C/C++ and anything") and can, in principle, provide visualisations, navigation and query facilities for any structural model. For example, with the necessary plugins, the tool can provide insights into XML/XSLT libraries and web site maps at varying levels of abstraction.

As tools like this become better established and more affordable, they will surely become ‘must-haves’ for architects and developers in large systems environments.

Anti-Regression Strategy – Making it Happen

We’ll close this article series with some guidelines summarised from this and previous articles. Numerals in brackets refer to the article number.
  1. Regressions in working software affect business users, technical support, testers, developers, software and project management and stakeholders. It is everyone’s problem (I, V).
  2. A comprehensive anti-regression strategy would include both regression prevention and detection techniques from a technical and business viewpoint. (I, II).
  3. Impact analysis can be performed from both business and technical viewpoints. (all)
  4. Technical impact analysis really needs tool support; consider open source, proprietary (or consider building your own to meet your specific objectives).
  5. Regression testing may be your main defence against regression, but should never be the only one; impact analysis prevents regression and informs good regression testing. (I, II, IV).
  6. Regression testing can typically be performed at the component, system or business level. These test levels have different objectives, owners and may be automated to different degrees (III).
  7. Regression tests may be created in a test-driven regime, or as part of requirements or design based approaches. Reuse of tests saves time, but check that these tests actually meet your anti-regression objectives (III).
  8. Regression tests become less effective over time; review your test pack regularly, especially when you are about to add to it. (This could be daily in an Agile environment!) (III)
  9. Analyses of production data will tell you the formats, volumes and patterns of data that are most common – use them as a source of test data and a model for coverage; but don't forget to include the negative tests too! (III)
  10. If you need to be selective in the tests you retain and execute then you’ll need an agreed process, forum, decision-maker or makers or criteria for selection (agreed with all stakeholders in 1 above) (III).
  11. Most regression testing can and should be automated. Understand your context (objectives, test levels, risk areas, developer/tester motivations and capabilities etc.) before defining your automation strategy (III, IV).
  12. Consider what test levels, system stimulation and outcome detection methods, ownership, capabilities and tool usability are required before defining an automation regime (IV).
  13. Creating an automation regime retrospectively is difficult and expensive; test-first approaches build regression testing into the DNA of project teams (V).
  14. There is a lot of thinking, activity and new approaches/tools being developed to support requirements testing, exampling, live-specs and test automation; take a look (V).
I wish you the best of luck in your anti-regression initiatives.

I’d like to express sincere thanks to the Eurostar Team for asking me to write this article series and attendees at the Test Management Summit for inspiring it.

Paul Gerrard 23 August 2010.

Tags: #regressiontesting #anti-regression


First published 06/11/2009

This paper, written by Paul Gerrard, was presented at the EuroSTAR Conference in London, November 1995, and won the prize for 'Best Presentation'.

Client/Server (C/S) technology is being taken up at an incredible rate. Almost every development organisation has incorporated C/S as part of its IT strategy. It appears that C/S will be the dominant architecture taking IT into the next millennium. Although C/S technology is gaining acceptance rapidly and development organisations are getting better at building such systems, performance issues remain an outstanding risk even when a system meets its functional requirements.

This paper sets out the reasons why system performance is a risk to the success of C/S projects. A process is outlined which the authors have used to plan, prepare and execute automated performance tests. The principles involved in organising a performance test are set out, and an overview is given of the tools and techniques that can be used for testing two- and three-tier C/S systems.

In planning, preparing and executing performance tests, there are several aspects of the task which can cause difficulties. The problems that are encountered most often relate to the stability of the software and the test environment. Unfortunately, testers are often required to work with a shared environment with software that is imperfect or unfinished. These issues are discussed and some practical guidelines are proposed.

Download the paper from here.



Tags: #performancetesting #client/server


First published 13/07/2011

It seems to me that, to date, perceptions of exploration in communities that don't practice it have always been that it is appropriate only for document- and planning-free contexts. It's not been helped by the emphasis that is placed on these contexts by the folk who practice and advocate exploration. Needless to say, the certification schemes have made the same assumption and promote the same misconception.

But I'm sure that things will change soon. Agile is mainstream and a generation of developers, product owners and testers who might have known no other approach are influencing at a more senior level. Story-based approaches to define requirements or to test existing requirements 'by example' and drive acceptance (as well as being a source of tests for developers) are gaining followers steadily. The whole notion of story telling/writing and exampling is to ask and answer 'what if' questions of requirements, of systems, of thinking.

There's always been an appetite for less test documentation but rarely the trust – at least in testers (and managers) brought up in process and standards-based environments. I think the structured story formats that are gaining popularity in Agile environments are attracting stakeholders, users, business analysts, developers – and testers. Stories are not new – users normally tell stories to communicate their initial views on requirements to business analysts. Business analysts have always known the value of stories in eliciting, confirming and challenging the thinking around requirements.

The 'just-enough' formality of business stories provides an ideal communication medium between users/business analysts and testers. Business analysts and users understand stories in business language. The structure of scenarios (given/when/then etc.) maps directly to the (pre-conditions/steps/post-conditions) view of a formal test case. But this format also provides a concise way of capturing exploratory tests.
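As a minimal sketch of that mapping, a given/when/then scenario translates almost mechanically into the pre-conditions/steps/post-conditions view of a formal test case. The story wording and field names here are invented for illustration:

```python
# A business story scenario in given/when/then form (invented example).
story_scenario = {
    "given": "a registered user with an empty basket",
    "when":  "the user adds an in-stock item to the basket",
    "then":  "the basket shows one item and the correct total",
}

def to_test_case(scenario):
    """Map a scenario onto the formal test-case view described above:
    given -> pre-conditions, when -> steps, then -> post-conditions."""
    return {
        "pre_conditions":  scenario["given"],
        "steps":           scenario["when"],
        "post_conditions": scenario["then"],
    }

test_case = to_test_case(story_scenario)
print(test_case["pre_conditions"])
```

Because the mapping is so direct, the same concise record can serve as a business-readable scenario, a formal test case, or a captured exploratory test.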

The conciseness and universality of stories might provide the 'just enough' documentation that allows folk who need documentation to explore with confidence.

I'll introduce some ideas for exploratory test capture in my next post.

Tags: #exploratorytesting #businessstories #BusinessStoryManager


First published 28/05/2007

As a matter of record, I wanted to post a note on my involvement with the testing certification scheme best known in the UK (and many other countries) as the ISEB Testing Certificate Scheme. I want to post some other messages commenting on the ISEB, ISTQB and perhaps other schemes too, so a bit of background might be useful.

In 1997, a small group of people in the UK started to discuss the possibility of establishing a testing certification scheme. At that time, Dorothy Graham and I were probably the most prominent. There was some interest in the US too, I recall, and I briefly set up a page on the Evolutif website promoting the idea of a joint European/US scheme and asking for expressions of interest in starting a group to formulate a structure, a syllabus, an examination and so on. Not very much came of that, but Dot and I drafted an outline syllabus, which was just a list of topics about a page long.

The Europe/US collaboration didn't seem to be going anywhere, so we decided to start in the UK only to begin with. At the same time, we had been talking to people at ISEB, who seemed interested in administering the certification scheme itself. At that time ISEB was a certifying organisation with charitable status, independent of the British Computer Society (BCS). That year, ISEB decided to merge into the BCS. ISEB still had its own identity and brand, but was a subsidiary of BCS from then on.

ISEB, having experience of running several schemes for several years (whereas we had no experience at all) suggested we form a certification 'board' with a chair, terms of reference and constitution. The first meeting of the new board took place on 14th January 1998. I became the first Chair of the board. I still have the Terms of Reference for the board, dated 17 May 1998. Here are the objectives of the scheme and the board extracted from that document:

Objectives of the Qualification

  • To gain recognition for testing as an essential and professional software engineering specialisation by industry.
  • Through the BCS Professional Development Scheme and the Industry Structure Model, to provide a standard framework for the development of testers' careers.
  • To enable professionally qualified testers to be recognised by employers, customers and peers, and to raise the profile of testers.
  • To promote consistent and good testing practice within all software engineering disciplines.
  • To identify testing topics that are relevant and of value to industry.
  • To enable software suppliers to hire certified testers and thereby gain commercial advantage over their competitors by advertising their tester recruitment policy.
  • To provide an opportunity for testers or those with an interest in testing to acquire an industry-recognised qualification in the subject.

Objectives of the Certification Board

The Certification Board aims to deliver a syllabus and administrative infrastructure for a qualification in software testing which is useful and commercially viable.

  • To be useful it must be sufficiently relevant, practical, thorough and quality-oriented that it will be recognised by IT employers (whether in-house developers or commercial software suppliers) to differentiate amongst prospective and current staff; it will then be viewed as an essential qualification for those staff to attain.
  • To be commercially viable it must be brought to the attention of all of its potential customers and must seem to them to represent good value for money at a price that meets ISEB's financial objectives.

The Syllabus evolved and was agreed by the summer. The first course and examination took place on 20-22 October 1998, and the successful candidates were formally awarded their certificates at the December 1998 SIGIST meeting in London. In the same month, I resigned as Chair but remained on the board. I subsequently submitted my own training materials for accreditation.

Since the scheme started, over 36,000 Foundation examinations have been taken with a pass rate of about 90%. Since 2002 more than 2,500 Practitioner exams have been taken, with a relatively modest pass rate of approximately 60%.

The International Software Testing Qualifications Board (ISTQB) was established in 2002. This group aims to establish a truly international scheme and now has regional boards in 33 countries. ISEB have used the ISTQB Foundation syllabus since 2004, but continue to use their own Practitioner syllabus. ISTQB are developing a new Practitioner-level syllabus to be launched soon, but ISEB have already publicised their intention to launch their own Practitioner syllabus too. It's not clear yet what the current ISEB-accredited training providers will do with TWO schemes. It isn't obvious what the market will think of two schemes either.

Interesting times lie ahead.

Tags: #istqb #iseb #TesterCertification


First published 05/11/2010

I attended the Unicom “Next Generation Testing Conference” this week and there was lots of good discussion on the difference between Agile and Waterfall.

Afterwards (isn’t it always afterwards!) I thought of something I wished I’d raised at the time, so I thought I’d write a blog by way of follow up. Now I don’t do blogs, although I’ve been meaning to for ages. So this is a new, and hopefully pleasant, experience for me.

I’ve always been concerned about how to contract with a supplier in an agile way. Supplier management is a specific area of interest for me. I’ve listened to many presentations and case studies on this, but frankly haven’t been convinced yet. However, I’ve had one of those light bulb moments. We spend so much time talking about how to improve the delivery and predictability of systems and yet our industry has a bunch of suppliers whose business depends upon us not getting requirements right!

This isn’t their fault though as the purchasing process most companies go through encourages and rewards this behaviour. In a competitive bidding process where price is sensitive, all bidders are seeking to give a keen price for their interpretation of the requirements, knowing that requirements are typically inconsistent and complete. This allows them to bid low, maybe at cost or even lower, and then make their profit on the re-work. I’d always thought re-work was around 30% but Tom Gilb gave figures that show it can be much higher than that, so that’s a good profit incentive isn’t it.

So, here we all are, seeking to reduce spend putting pressure on daily rates, etc. We’re looking at the wrong dimension here! There is a much bigger prize!

Can we reduce the cost of purchasing, reduce the cost of re-work and meet our goals of predictable system delivery? Now, (and some of you will be thinking finally!) I’m convinced agile can help achieve this, but is any customer out there enlightened enough to realise this? Or is it more important to them to maintain the status quo and avoid change?

Tags: #agile #contracts #suppliers #test


First published 29/06/2011

All testing is exploratory. There are quite a few definitions of Exploratory Testing, but the easiest to work with is the definition on Cem Kaner's site.

“Exploratory software testing is a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the value of her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project.”

The usual assumption is that this style of testing applies to software that exists and where the knowledge of software behaviour is primarily to be gathered from exploring the software itself.

But I'd like to posit that if one takes the view:

  • All the artefacts of a project are subjected to testing
  • Testers test systems, not just software in isolation
  • The learning process is a group activity that includes users, analysts, process and software designers, developers, implementers, operations staff, system administrators, testers, trainers, stakeholders and the senior user, most if not all of whom need to test, interpret and make decisions
  • All have their own objectives and learning challenges and use exploration to overcome them.
... then all of the activities from requirements elicitation onwards use testing and exploration.

Exploration wasn't invented by the Romans, but the word explorare is Latin. It's hard to see how the human race could populate the entire planet without doing a little exploration. The writings of Plato and Socrates are documented explorations of ideas.

Exploration is in many ways like play, but requires a perspicacious thought process. Testing is primarily driven by the tester's ability to create appropriate and useful test models. An individual may hold all the knowledge necessary to test in their heads whilst collecting, absorbing and interpreting information from a variety of sources including the system under test. Teams may operate in a similar way, but often need coordination and control and are accountable to stakeholders who need planning, script and/or test log documentation. Whatever. At some point before, during and/or after the 'test' they take guidance from their stakeholders and feedback the information they gather and adjust their activity where necessary.

Testing requires an interaction between the tester, their sources of knowledge and the object(s) under test. The source of knowledge may be people, documents or the system under test. The source of knowledge provides insights as to what to test and provides our oracle of expectations. The “exploration” is mostly of the source of knowledge. The execution of tests and consideration of outcomes confirms our beliefs – or not.

The real action takes place in the head of the tester. Consider the point where a tester reflects on what they have just learned. They may replay events in their mind and the “what just happened?” question triggers one of those aha! moments. Something isn't quite right. So they retrace their steps, reproduce the experience, look at variations in the path of thinking and posit challenges to what they test. They question and repeatedly ask, “what if?”

Of course, the scenario above could apply to testing some software, but it could just as easily apply to a review of requirements, a design or some documented test cases. The thought processes are the same. The actual outcome of a “what if?” question might be a test of some software. But it could also be a search in code for a variable, a step-through of a printed code listing, or an examination of a decision table in a requirement or a state transition diagram. The outcome of the activity might be some knowledge, more questions to ask or some written or remembered tests that will be used to question some software sooner or later.

This is an exploratory post, by the way :O)

Tags: #exploratorytesting
