My experiences in the Test Engineering business; opinions, definitions and occasional polemics. Many have been rewritten to restore their original content.
I'm working with Lalitkumar, who edits the Tea Time With Testers online magazine. It has a large circulation and I've agreed to write an article series for him on 'Testing the Internet of Everything'. I'll also be presenting webinars to go with the articles, the first of which is here: https://attendee.gotowebinar.com/register/1854587302076137473 It takes place on Saturday 19 April at 15:30. An unusual time – but there you go.
Lalit has asked for questions on the article and I'll respond to these during the webinar. For questions on a broader range of testing-related subjects, I'll write a response for the magazine. I'll also blog those questions and answers here.
Questions that result in an interesting blog will receive a free Tester's Pocketbook – go through the TTWT website and contact Lalit – anything goes. I look forward to some challenging questions :O)
Last night, we announced dates for two webinars that I will present on the subject, “Story-Based Test Automation Using Free Tools”. Nothing very exciting in that, except that it’s the first time we have used a paid-for service to host our own webinar and marketed that webinar ourselves. (In the past we have always pitched our talks through other people who marketed them).
Anyway, right now (8.40 PM GMT and less than 24 hours since we started the announcements) we have 96 people booked on the webinar. Our GoToWebinar account allows us to accept no more than 100. Looks like a sell-out. Great.
Coincidentally, James Bach and Michael Bolton have revisited and restated their positions on the “testing versus checking” and “manual versus automated testing” dichotomies (if you believe they are dichotomies, that is). You can see their position here: http://www.satisfice.com/blog/archives/856.
I don’t think these two events are related, but it seemed to me that it would be a good time to make some statements that set the scene for what I am currently working on in general and the webinar specifically.
Business stories and testing
You might know that we (Gerrard Consulting) have written and promoted a software development method (http://businessstorymethod.com) that uses the concept of business stories and have created a software as a service product (http://businessstorymanager.com) to support the method. The method is not a test method, but it obviously involves a lot of testing. Testing that takes place throughout the development process – during the requirements phase, development phase, test phase and ongoing post-production phases.
Business stories are somewhat more to us than ‘a trigger for a conversation’, but we’ll use the term ‘stories’ to refer to them from now on.
In the context of these phases, the testing in scope might be called by other names and/or be part of processes other than 'test': requirements prototyping, validation (Specification by Example/Behaviour-Driven Development/Acceptance Test-Driven Development/Test-Driven Development – take your pick), feature-acceptance testing, system testing, user testing, and regression testing during and after implementation and go-live.
There’s quite a lot of this testing stuff going on. Right now, the Bach–Bolton dialogue isn’t addressing all of this in a general way, so I’m keeping a watching brief on events in that space. I look forward to a useful, informative outcome.
How we use (business) stories
In this blog, I want to talk specifically about the use of stories in a structured domain-specific language (using, for example, the Gherkin format – see https://github.com/cucumber/gherkin) to example (and that is a KEY word) requirements. I’m not interested in the Cucumber-specific extensions to the Gherkin syntax. I’m only interested in the feature heading (As a…/I want…/So that…) and the scenario structure (Given…/When…/Then…) and how they are used to test in a broader sense:
Stories provide accessible examples in business language of features in use. They might be the starting point of a requirement, but usually not a full definition of a requirement. Without debating whether requirements can ever be complete, we argue that Specification by Example is not (in general) possible or desirable. See here: http://gerrardconsulting.com/index.php?q=node/596
If requirements provide definitions of behaviour in a general way, stories can be used to create examples of features described in requirements that are specific and, if carefully chosen, can be used to clarify understanding, to prototype behaviours and validate requirements in the eyes of stakeholders, authors and recipients of requirements. We describe this process here: http://gerrardconsulting.com/index.php?q=node/604
Depending on who creates these stories and scenarios and for what purpose, these scenarios can be used to feed a BDD, ATDD or Specification by Example approach. The terminology used in these approaches varies, but a tester would recognise them as a keyword-driven approach to test automation. Are these automated scenarios checks or tests? Probably checks. But these automated checks have multiple goals beyond ‘defect-detection’.
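As a sketch of what ‘keyword-driven’ means here – my own minimal illustration with an invented banking domain, not the mechanism of any particular BDD tool – scenario steps can be matched against registered step functions:

```python
import re

# A registry mapping step phrases to "keyword" functions -- the essence of
# a keyword-driven framework. The banking steps are invented for illustration.
STEPS = []

def step(pattern):
    """Register a function as the implementation of a Given/When/Then phrase."""
    def register(func):
        STEPS.append((re.compile(pattern), func))
        return func
    return register

@step(r"my account balance is (\d+)")
def set_balance(ctx, amount):
    ctx["balance"] = int(amount)

@step(r"I withdraw (\d+)")
def withdraw(ctx, amount):
    ctx["balance"] -= int(amount)

@step(r"my balance should be (\d+)")
def check_balance(ctx, expected):
    assert ctx["balance"] == int(expected)

def run_scenario(lines):
    """Strip the Gherkin keyword, find the matching step function, call it."""
    ctx = {}
    for line in lines:
        phrase = re.sub(r"^(Given|When|Then|And)\s+", "", line.strip())
        for pattern, func in STEPS:
            match = pattern.fullmatch(phrase)
            if match:
                func(ctx, *match.groups())
                break
        else:
            raise LookupError(f"no step matches {phrase!r}")
    return ctx

scenario = [
    "Given my account balance is 100",
    "When I withdraw 40",
    "Then my balance should be 60",
]
run_scenario(scenario)  # passes silently; a failing Then raises AssertionError
```

Tools such as Cucumber, behave and Robot Framework do something equivalent in a far more robust way; the point is only that a Given/When/Then scenario is, underneath, a sequence of keyword invocations.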
Story-based testing and automation
You see, the goals of an automated test (and let me persist in calling them tests for the time being) vary, and there are several distinct goals of story-based scenarios as test definitions.
In the context of a programmer writing code, the rote automation of scenarios as tests gives the programmer a head start in their test-driven development approach. (And crafting scenarios in the language of users segues into BDD of course). The initial tests a programmer would have needed to write already exist so they have a clearer initial goal. Whether the scenarios exist at a sufficiently detailed level for programmers to use them as unit-tests is a moot point and not relevant right now. The real value of writing tests and running them first derives from:
1. Early clarification of the goal of a feature when it is defined
2. Immediate feedback on the behaviour of a feature when it is run
3. When the goal is understood and the tests pass, the programmer can more safely refactor their code
Is this testing? 2 is clearly an automated test. 3 is the reusable regression test that might find its way into a continuous integration and test regime. These tests typically exercise objects or features through a technical API. The user interface probably won’t be exercised.
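To make this concrete, here is a hand-translated sketch of a scenario turned into a programmer's test – the Account class and its API are my own invention, used only to show the head start a scenario gives:

```python
import unittest

class Account:
    """Minimal implementation written to make the scenario-derived tests pass."""
    def __init__(self, balance=0):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

class TestWithdrawal(unittest.TestCase):
    # Given my account balance is 100
    # When I withdraw 40
    # Then my balance should be 60
    def test_withdrawal_within_balance(self):
        account = Account(balance=100)
        account.withdraw(40)
        self.assertEqual(account.balance, 60)

    # The refused-withdrawal path gives the immediate behavioural
    # feedback of point 2 above.
    def test_withdrawal_beyond_balance_is_refused(self):
        account = Account(balance=100)
        with self.assertRaises(ValueError):
            account.withdraw(150)
```

Run with `python -m unittest` against the file containing these classes. Note that the tests exercise the object through its technical API – no user interface is involved.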
There is another benefit of using scenarios as the basis of automated tests. The language of the scenario (which is derived from the business’s language in a requirement) can be expected to be reused in the test code. We can expect (or indeed mandate) the programmer to reuse that language in the naming of their variables and objects in code. The goals of Ubiquitous Language in systems (defined by Eric Evans and nicely summarised by Martin Fowler here http://martinfowler.com/bliki/UbiquitousLanguage.html) are supported.
Teams needing to demonstrate acceptance of a feature (identified and defined by a story), often rely on manual tests executed by the user or tester. The tester might choose to automate these and/or other behaviour or user-oriented tests as acceptance regression tests.
Is that it? Automated story tests are ‘just’ regression tests? Well maybe so.
The world is going 'software as a service' and the development world moves closer to continuous delivery approaches every day. The time available to do manual testing is shrinking rapidly. In extremis, to avoid bottlenecks in the deployment pipeline (http://continuousdelivery.com/2010/02/continuous-delivery/) there may be time only to perform cursory manual testing. Manual, functional testing of new features might take place in parallel with development and automation of functional tests must also happen ahead of deployment because automated testing becomes part of the deployment process itself. Perhaps manual testing becomes a test-as-we-develop activity?
But there are two key considerations for this high-automation approach to work:
Firstly, I’ve said elsewhere that Continuous Delivery is a beast that eats requirements (http://gerrardconsulting.com/index.php?q=node/608) and, for CD to work, the quality of requirements must be much higher than we are accustomed to. We use the term trusted requirements. You could say: tested and trusted. We – and I mean testers mostly – need to validate requirements using stories so the developers receive both trusted requirements and examples of features in use. Without trusted requirements, CD will just hit a brick wall faster.
Secondly, it seems to me that for the testers not to be a bottleneck, then the manual checking that they do must be eliminated. Whichever tests can be automated should be. The responsibility for automation of checking must move from being a retrospective activity to possibly a developer activity. This will free the manual testers to conduct and optimise their activity in the short time they have available.
There are several spin-off benefits of basing tests on stories and scenarios. Here are two: if test automation is built early, then all checks can take advantage of it; if automation is built in parallel with the software under test, then the developers are much more likely to consider the test automation and build the hooks to allow it to operate effectively. The continuous automated testing provides the early warning system of continuous delivery regimes. These don't 'find bugs'; rather, they signal functional equivalence. Or not.
I wrote a series of four articles on 'Anti-Regression Approaches' here: http://gerrardconsulting.com/index.php?q=node/479. What are the skills of setting up regression test regimes? Not necessarily the same as those required to design functional tests. Primarily, you need automation skills and a knowledge of the internals of the system under test. Are these testing skills? Not really. They are more likely to be found in developers. This might be a good thing. Would it not be best to place responsibility for regression detection on those people responsible for introducing regressions? Maybe developers can do it better?
One final point. If testers are allowed (and I use that word deliberately) to test or validate requirements using stories in the way we suggest, then the quality of requirements provided to developers will improve. And so will the software they write. And the volume of testing we are currently expected to resource will reduce. So we need fewer testers. Or should I say checkers?
This is the essence of the “redistributed testing” offer that we, as testers, can make to our businesses.
The webinar is focused on our technical solution and is driven by the thinking above.
At the Unicom NextGen Testing show on 26th June, (http://www.next-generation-testing.com/) I'll be pitching some ideas for where the testing world is going – in around 15 minutes. I thought I'd lay the groundwork for that short talk with a blog that sets out what I've been working on for the past few months. These things might not all be new in the industry, but I think they will become increasingly important to the testing community.
There are four areas I've been working on in between travelling, conferences and teaching.
Testers and Programming
I've been promoting the idea of testers learning to write code (or at least becoming more technical) for some time. In February this year, I wrote an article for The Testing Planet, 'The Testers and Coding Debate: Can We Move on Now?', which suggested we 'move on' and that those testers who want to learn to code should find it an advantage. It stirred up a lively debate, so it seems the debate is not yet over. No one is suggesting that learning how to write code should be compulsory, and no one is suggesting that testers become programmers.
My argument is this: for the investment of time and effort required, learning how to write some simple code in some language will give you a skill that you might be able to use to write your own tools, but more importantly, the confidence and vocabulary to have more insightful discussions with developers. Oh, and by the way, it will probably make you a better tester because you will have some insider knowledge on how programmers work (although some seem to disagree with that statement).
Anyway, I have taken the notion further and proposed a roadmap or framework for a programming training course for testers. Check this out: http://gerrardconsulting.com/?q=node/642
Lean Python
Now, my intention all along in the testers and programming debate was to see if I could create a Python programming course that would be of value for testers. I've been a Python programmer for about five years and believe that it really is the best language I've used for development and for testing. So I discussed with my Tieturi friends in Finland the possibility of running such a course in Helsinki, and I eventually ran it in May.
In creating the materials, I initially thought I'd crank out a ton of PowerPoint and some sample Python code and walk the class through examples. But I changed tack almost immediately. I decided to write a Python programming primer in the Pocketbook format and extract content from the book to create the course. I'd be left with a course and a book (that I could give away) to support it. Then I realised two things:
Firstly, it was obvious that to write such a short book, I'd have to ruthlessly de-scope much of the language and standard functions, libraries etc.
Second – it appeared that in all the Python programming I've done over the last five years, I only ever used a limited sub-set of the language anyway. Result!
And so, I only wrote about the features of the language that I had direct experience of.
Now, it quickly occurred to me that I really did not know where all this Big Data was coming from. There were hints from here and there, but it subsequently became apparent that the real tidal wave that is coming is the Internet of Things (also modestly known as the Internet of Everything).
So I started looking into IoT and IoE and how we might possibly test it. I have just completed the second article in a series on Testing the Internet of Everything for the Tea Time with Testers magazine. In parallel with each article, I'm presenting a webinar to tell the story behind each article.
In the articles, I'm exploring what the IoT and IoE are and what we need to start thinking about. I approach this from the point of view of a society that embraces the technology, then look more closely at the risks we face and finally at how we as the IT community in general and the testing community in particular should respond. I'm hopeful that I'll get some kind of IoE Test Strategy framework out of the exercise.
Over the past four years, since the 'testing is dead' meme, I've been saying that we need to rethink and redistribute testing. Talks such as “Will the Test Leaders Stand Up?” are a call to arms. “How to Eliminate Manual Feature Checking” describes how we can perhaps eliminate, through redistributed testing, the repetitive, boring and less effective manual feature checking.
It seems like the software development business is changing. It is 'Shifting Left' but this change is not being led by testers. The DevOps, Continuous Delivery, Behaviour-Driven Development advocates are winning their battles and testers may be left out in the cold.
Because the shift-left movement is gathering momentum, and Big Data and the Internet of Everything are on the way, I now believe that we need a New Model of Testing. I'm working on this right now. I have presented drafts of the model to audiences in the UK, Finland, Poland and Romania and the feedback has been really positive.
You can see a rather lengthy introduction to the idea on the EuroSTAR website here. The article is titled: The Pleasure of Exploring, Developing and Testing. I hope you find it interesting and useful. I will publish a blog with the New Model for Testing soon. Watch this space.
That's what's new in testing for me. What's new for you?
I came across an article that I wrote in August 2004. I think I wrote it for the BCS SIGIST Tester newsletter. For (my own) historical record, I thought I would post it to the website.
It's a 'Personal Review of the Testing Market'. I don't offer it here as a visionary statement but just as something that can be used to provide a partial perspective on how far (or how little, depending on your point of view) we have come in the last decade or so.
If I wrote a similar review now, the article would be considerably different (my interests have changed somewhat). But if I (or you) wrote about the same subjects mentioned in the article today, what do you think has changed?
Last month, I presented a webinar for the EuroSTAR conference. “New Model Testing: A New Test Process and Tool” can be seen below. To view it, you have to enter some details – that is not under my control, but down to the EuroSTAR conference folk. The slides are also below.
The documentation prepared in structured/waterfall or agile projects is often of dubious value. In structured projects, the planning documentation is prepared in a knowledge vacuum – the requirements are not stable, and the system under test is not available. In agile projects – where time is short and other priorities exist – not much may get written down anyway. I believe the only test documentation that can be captured reliably and trusted must be captured contemporaneously with exploration and testing.
The only way to do this would be using a pair tester or a bot to capture the thoughts of a tester as they express them. I've been working on a prototype robot that can capture the findings of the tester as they explore and test. The bot can be driven by a paired tester, but it has a speech recognition front-end so it can be used as a virtual pair.
From using the bot, it is clear that a new exploration and planning metaphor is required – I suggest Surveying – and we also need a new test process.
In the webinar, I describe my experiences of building and using the bot for paired testing and also propose a new test process suitable for both high integrity and agile environments. The bot – codenamed Cervaya™ – builds a model of the system as you explore and captures test ideas, risks and questions and generates structured test documentation as a by-product.
If you are interested in collaborating – perhaps as a Beta Tester – I'd be delighted to hear from you.
Many thanks to the EuroSTAR folk for allowing me to present the webinar, “Live Specifications: From Requirements to Automated Tests and Back”. This talk describes how we think companies can implement continuous delivery and live specifications using the Behaviour-Driven Development approach and redistributed testing.
There were some interesting questions posed at the time; some I answered, but several I didn't. I've finally got around to writing some notes against each. See below...
Q: How do you convince a client to adopt Agile?
Well, the reasons for going Agile are well understood now. The critical arguments (from my point of view) are:
Breaking larger projects into smaller ones reduces the complexity and risks of managing delivery – if a small project fails – it fails fast and cheaply.
Agile encourages the use of autonomous multi-skilled teams who communicate well. Decision-making delays are much reduced; feedback is fast, so problems are corrected more quickly and cheaply.
An on-site customer (often known as the product owner) provides day to day direction for projects, so technical people don’t go off track, and get feedback on ‘the right thing to do’ quickly.
Breaking complex functionality into stories allows stories to be prioritised by business value (by the on-site customer). As a consequence, the team always delivers value as early as possible and are not distracted by lower value activities.
The morale of Agile teams is usually much higher than other organisations because all members are involved, well-informed and accountable for progress towards delivery.
Agile does not necessarily reduce the amount of rework, but brings it forward so the consequences of defects are minimised. Further, refactoring is usually an explicit task for developers in Agile teams, so the rework due to redesign and discovery of better ways to do things is visible and manageable.
Q: BDD works very well for testing stories and component testing, e.g. web services, but when considering end-to-end scenario testing, BDD gets very messy. What do you suggest when testing scenario based end-to-end tests?
True enough. The current BDD tools focus very much at a story or feature-level testing and automation but do not support the ‘end to end’ testing required to test integration and consistency between features.
The BDD approach needs to align more with the increasing use of workflow and story-boarding approaches used by teams building non-trivial systems. Essentially, feature test scenarios need to be mapped to steps in a workflow so they become an executable end to end test procedure. Now, this is relatively easy with manual tests run by intelligent testers. But there are some specific challenges with automating these end to end tests.
Existing BDD tools can usually work with GUI automation tools, but more often, they are used with unit test frameworks. In general, they fit more naturally with developers’ feature testing than system or user acceptance testing. In order to create end to end tests:
Each feature/scenario test requires a context to be defined by the controlling workflow OR the feature/scenario needs keywords/macros that can be called to prepare the feature for use in different scenarios. It’s not yet clear which approach is best (or even viable, without too much manual effort).
Existing keyword-driven test tools and frameworks map nicely to BDD. It seems to me that the keyword driven approach for navigation through (potentially complex) workflows is the most likely path to success with end to end automation. Watch out for BDD capabilities in existing frameworks e.g. Robot Framework has a BDD capability.
In our tool, Business Story Manager, we already have the capability to create process-paths or workflows with steps mapped to features and scenarios. We have ‘integration to Robot Framework’ on our roadmap. Other integrations will follow.
Overall however, we expect most automated testing to focus on feature-based checking, rather than complex end to end testing with obscure paths.
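One way to picture the context problem described above is that each feature-level check runs against a shared workflow context, so a later step can rely on state an earlier feature established. This is my own sketch with an invented domain, not how any particular BDD tool or framework does it:

```python
# Each feature-level check is a function that reads and extends a shared
# workflow context; the end-to-end test is an ordered chain of them.
def register_customer(ctx):
    ctx["customer_id"] = "C-001"   # invented identifier

def open_account(ctx):
    # A feature can assert its preconditions on the workflow context.
    assert "customer_id" in ctx, "workflow requires a registered customer"
    ctx["account"] = {"owner": ctx["customer_id"], "balance": 0}

def deposit_funds(ctx):
    ctx["account"]["balance"] += 50

def check_statement(ctx):
    assert ctx["account"]["balance"] == 50

END_TO_END = [register_customer, open_account, deposit_funds, check_statement]

def run_workflow(steps):
    ctx = {}
    for step in steps:
        step(ctx)   # each feature check both uses and extends the context
    return ctx

final = run_workflow(END_TO_END)
```

The keyword-driven frameworks mentioned above effectively manage this context for you; the difficulty is deciding whether the workflow supplies context to each feature or each feature exposes setup keywords – the open question in the first bullet.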
Q: Does a BDD dictionary replace an acceptance test plan?
In a word, no. I’m not sure I understand the question. In the Business Story Method and supporting technology, the dictionary has several core features:
To capture terms (and concepts) that are relevant to an application and requirements. These may be business or technical terms, abbreviations or any concept, even business rules.
To index the usage of these terms in requirements, stories and tests so they can be traced and coverage analyses made.
We extend this in our tool to cover data item names in scenario outlines. Where scenarios contain <placeholders> for data provided as tabulated examples we can trace the usage of these data items and in effect create a simple, but effective data dictionary for them.
Overall – the goals of the dictionary are to promote a ubiquitous language, support traceability, impact analysis and provide requirements and story coverage measures that are based on the language used in the business domain.
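The indexing idea at the heart of this can be sketched in a few lines – a toy illustration with invented terms and artefacts, not the implementation in Business Story Manager:

```python
# A toy term index: for each dictionary term, record which requirements
# and stories mention it, so usage can be traced and coverage measured.
TERMS = ["account", "balance", "withdrawal"]

documents = {
    "REQ-1": "The account balance must never go negative.",
    "REQ-2": "A withdrawal is refused if funds are insufficient.",
    "STORY-1": "Given my account balance is 100 when I request a withdrawal...",
}

def build_index(terms, docs):
    index = {term: [] for term in terms}
    for doc_id, text in docs.items():
        lowered = text.lower()
        for term in terms:
            if term in lowered:
                index[term].append(doc_id)
    return index

index = build_index(TERMS, documents)
# index["withdrawal"] lists every artefact that uses the term; a term
# with an empty list is defined but never exercised by any story.
```

A real dictionary would match whole words and synonyms rather than substrings, but even this crude index supports the traceability and coverage questions above.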
Q: Could you name again the BDD tools you mentioned as an example at the beginning of the webinar?
Q: In BDD, when scenarios become more and more numerous, how do you organise them effectively?
It’s an obvious problem with stories, once you create non-trivial systems using them. We have heard tales of companies managing 20,000 stories in separate feature files in a single directory, and that’s a scary prospect. One way with existing ‘file-based’ products would be to create a consistent file naming convention and directory structure. But this is really cumbersome.
The Relish website http://relishapp.com offers a simple repository service for your stories (and other documentation) and is free for ‘open’ projects. Expect other services like this to spring up.
Q: In regards to production vs going live, how do you avoid customers being exposed to changes in production?
Increasingly, the continuous delivery model implies that software and changes are delivered into a production (or production-like) environment for testing prior to go live. There are several approaches that can be used including:
Feature toggling (http://en.wikipedia.org/wiki/Feature_toggle) – essentially a flag in the software that can be set to turn features (that may not be complete) on or off at will.
So-called ‘dark releases’ – whereby features are released to production but are not advertised or accessible through the normal user interface, but could be accessed by testers using special urls and passwords, for example. Sometimes called ‘back doors’, these are useful for testers but are also of interest to hackers :)
A ‘blue-green’ release implements two parallel environments in production. One is live and the other is for evaluation. When the evaluation version has been checked out, routers can be flipped to reroute traffic to the new version (and flipped back in case of problems).
Limited releases and Canary releases are where a software version is released to only a subset of users in a particular country, region or network or to a subset of production servers for evaluation. Users may or may not be aware of such trials. If a problem occurs, only a subset of your users are affected (and a rapid rollback can be effected).
In all cases, a reliable and rapid roll-back facility is required to avoid the wrath of users affected by a faulty release. These approaches are discussed in Jez Humble and David Farley’s excellent book, “Continuous Delivery”.
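The first and last of these approaches can be sketched in a few lines – a runtime flag, and a deterministic percentage rollout. The hashing scheme here is a common pattern, my own sketch rather than any specific vendor's mechanism:

```python
import hashlib

FLAGS = {"new_checkout": False}   # toggled per environment, not per build

def feature_enabled(name):
    """A feature toggle: the code path is shipped but switched off."""
    return FLAGS.get(name, False)

def in_canary(user_id, percent):
    """Deterministically place a user in the canary cohort.

    Hashing the user id gives a stable bucket in 0-99, so the same user
    always sees the same version for the duration of a rollout.
    """
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent

# Usage: release to 5% of users; everyone else keeps the old path.
for user in ["alice", "bob", "carol"]:
    version = "new" if in_canary(user, 5) else "old"
```

The essential property is that the routing decision is cheap, stable per user, and reversible: setting the percentage back to zero is the rapid rollback the paragraph above calls for.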
Q: So you suggest testers to be more like business analysts and to help product owners to refine requirements, right?
I’ve suggested that business analysts and testers should look to using a critical thinking discipline to prepare stories and scenarios to illustrate features, derived from requirements. Scenarios can be used to drive reviews of requirements by exampling them. The DeFOSPAM approach is a simple method of deriving stories and scenarios for this purpose.
Now, testers are not alone in having critical thinking skills or a touch of scepticism. It may be that business analysts perform the story-generation and requirements validation role. But the scenarios can also define a set of required acceptance tests for features. In this case, the testers might be better placed to create them.
Our recommendation is that stories and scenarios are created to achieve this dual goal. By so doing, requirements are improved, developers get concrete examples of features in use and a minimal set of test cases they can choose to automate through a BDD approach. In principle, all feature checks could be provided as scenarios for developers to automate so that later testers can be relieved of some or all of this error-prone chore.
There is an opportunity for testers to perform early story and scenario preparation to reduce the amount of later manual checking. The time saved may mean fewer testers are needed. What better incentive is there to get involved early?
Q: What if the requirements are potential needs that users might require? No one will know until it is out in the real world.
If a requirement describes a ‘nice to have’, sham, vanity or useless feature and it is not challenged at the requirements stage, then it is possible that it will be developed and delivered, tested and deployed – and never used. This is more likely in a structured or outsourced development. In an agile team, one would have thought this is much less likely.
Regardless of approach however, the task of creating a feature description and scenarios is like a paper prototype. The “as a.. I want … so that …” construct drives out an owner, a need and a purpose for the feature. It challenges the stakeholder motives: “who will use it? What does it do? And why?” The chances are this highlights a dud feature, or at least challenges its viability.
If a feature makes it through this stage, then the “given … when … then …” scenarios provide an opportunity to challenge the feature using real-world concrete examples. The sceptical analyst, developer or tester can use the scenarios to ask, “what if…?” and suggest some good, bad, contradictory, anomalous or just plain meaningless consequences.
The act of story creation can be viewed as a ‘thought experiment’ to speculate on how the feature may stand up to real world operation. It is, in effect, a proxy or prototype, much cheaper to test than a delivered feature for acceptance.
Q: What if you have a vague requirement? What if requirements change over time?
If a requirement is vague and the developer has nothing better to work from, then the developer is likely to guess at the requirement, make unsafe assumptions, or invent solutions that bear little relation to the real need. Perhaps they know the business domain and deliver something useful. But the risk of getting it completely wrong is probably too high to bear.
Creating stories to illustrate a feature specified in a requirement triggers some difficult questions for stakeholders. The DeFOSPAM mnemonic represents seven key questions:
What do the words of the requirement mean? (Definition)
What features are being specified? (Features)
What outcomes can be identified? (Outcomes)
What situations or scenarios must the feature deal with? (Scenarios)
Does the requirement predict sensible outcomes? (Prediction)
Is the text of the requirement ambiguous? (Ambiguity)
Has anything been left out of the requirement? (Missing)
The DeFOSPAM process forces the stakeholder to articulate a requirement more clearly.
To the question, “What if the requirement changes over time?” one has to ask in response, “Why does it change?”
In some cases, the business need changes. Well, there’s not much we can do about that one except negotiate the changes and cost of change with stakeholders. More often, requirements change because the perception of the need changes. One could look at a software project as a learning experience for stakeholders. They might propose some vague requirements initially, and wait for the developers to deliver some software for the stakeholders to evaluate. Having some experience of a solution in use, the stakeholders then say, “that’s great, but what I really want is…”
This process works, but my, it is frustrating for everyone on the team, especially the developers and testers whose time is wasted. How can we shortcut this process? By giving stakeholders examples of the proposed system in use through a prototype, wireframes and examples. Creating stories and scenarios coupled with steps in a workflow, with perhaps wireframes to give them a feel for what features might look like can trigger the same learning process, but at a much lower cost.
So, the answer is really to accelerate the learning process by providing rapid, meaningful feedback through example and improve the requirements and associated stories so they are trusted. Trusted does not mean perfect. In the Business Story Pocketbook (p 31), “A trusted requirement is one that, at this moment in time, is believed to accurately reflect the users’ need, and sufficiently detailed to be developed and tested”.
Q: What is the best tool for requirement management? What are the most important features a tool should have to be used in SDLC?
There are many, many RM tools out there, and it would be unreasonable to promote any one of them. Requirements can comprise many different models as well as the most common textual description. But from the ‘Live Specification’ point of view, covering functional requirements only (and excluding the obvious needs to be easy to use, flexible, etc.), these features seem to be key:
The functional requirement encompasses a ‘statement of need or business rule’ (the traditional requirement) plus associated features, scenarios and examples. Think of this as ‘specification AND example’.
A dictionary of terminology, abbreviations, synonyms, antonyms, data names, business rules and an index of use throughout the requirements content.
Traceability from business goals through requirements, features, scenarios and examples to allow business impact analysis. A change at any level can be traced up or down the hierarchy.
Reporting of requirements content in a format understandable to stakeholders, at any level of the hierarchy, so that reviews of goals versus requirements, and requirements versus features/scenarios, can be supported.
Integration of features and scenarios with a test execution management tool, test automation framework or BDD tool so that requirements content drives testing and development, and test status can be reflected up the hierarchy from scenarios through features and requirements to business goals.
Needless to say: accessibility to everyone in a business or project team, version and change history, and access control.
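The traceability and status roll-up features above can be sketched very simply. This is a minimal illustration (all names and statuses are invented, and no particular RM tool works exactly this way): a goal-to-scenario hierarchy where a failing scenario surfaces at the business goal level.

```python
# Hypothetical sketch of the traceability hierarchy described above:
# business goal -> requirement -> feature -> scenarios, with test status
# rolled up from leaf scenarios to the goal.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    status: str = "untested"              # only leaf scenarios carry real status
    children: list = field(default_factory=list)

    def rolled_up_status(self):
        """A parent passes only if every descendant scenario passes."""
        if not self.children:
            return self.status
        child_statuses = [c.rolled_up_status() for c in self.children]
        if all(s == "pass" for s in child_statuses):
            return "pass"
        if any(s == "fail" for s in child_statuses):
            return "fail"
        return "untested"

goal = Node("Increase online sales", children=[
    Node("Customers can check out as guests", children=[
        Node("Guest checkout feature", children=[
            Node("Guest pays by card", status="pass"),
            Node("Guest pays by voucher", status="fail"),
        ]),
    ]),
])

# The failing voucher scenario is visible at goal level: "fail".
print(goal.rolled_up_status())
```

Because a change at any node can be traced to its parents and children, the same structure supports business impact analysis in both directions.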
Q: How will the business see “more testing” in the initial phase? How do we convince the business that testers will now do more “testing” of requirements and fewer integration tests (after the development team delivers)?
I’ve tried to explain above how the exampling and validation of requirements using business stories has some significant benefits. The trade-off is that it requires a change of emphasis in the behaviours of analysts, developers and testers and collaboration between all three. But you know what? Productive Agile teams have reminded us that software development is most effective when it is treated as a collaborative process.
The change required is that testers need to collaborate both with business analysts (to be involved earlier and contribute their critical skills to creating stories and scenarios) and with developers (to provide coherent scenarios for automation of feature checks).
The payoffs are significant:
Business stories provide early feedback, so vague, ambiguous and incomplete requirements can be refined until they are trusted to reflect the true need.
Business stories and scenarios provide clear ‘feature acceptance criteria’ but can also be a covering set of feature checks that can be automated by professional developers.
Automated feature checks support both the Behaviour- and Test-Driven Development approaches so development is less error-prone; they also provide candidate regression tests to be used for the lifetime of the feature.
Automated feature checking reduces the need for labour-intensive, error-prone, expensive (and boring) manual checking, so test teams can focus on the more subtle end-to-end and obscure tests.
Delivery timescales become more reliable and the quality of delivered software is higher.
Using tools that provide the traceability, better business impact analysis becomes possible – changes can be properly evaluated – at last.
New and meaningful test coverage measures based on the language and terminology used by stakeholders become viable.
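To make the 'scenarios as feature checks' payoff concrete, here is a hedged sketch (the shipping rule and figures are invented for illustration): each row of a scenario's example table, agreed with stakeholders in their own terminology, becomes one automated check that can run on every build.

```python
# Hypothetical feature: shipping cost. The rule below is assumed for
# illustration only: free shipping at 50.00 and above, else a flat 4.99.
def shipping_cost(order_total):
    return 0.0 if order_total >= 50.00 else 4.99

# Each row is a scenario example agreed with stakeholders; together the
# rows form a covering set of feature checks (note the boundary at 50.00).
examples = [
    (49.99, 4.99),   # just below the free-shipping threshold
    (50.00, 0.0),    # exactly on the threshold
    (120.00, 0.0),   # well above the threshold
]

for order_total, expected in examples:
    actual = shipping_cost(order_total)
    assert actual == expected, f"{order_total}: got {actual}, want {expected}"
print("all feature checks pass")
```

Because the examples use the stakeholders' own language and data, a count of passing rows is also a coverage measure the business can actually understand.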
An increasing number of companies are moving towards continuous delivery, live specification, BDD, redistributed testing and so on. To my knowledge, there are no objective, comparative studies or metrics demonstrating the benefits and value for money of these approaches. I doubt there ever will be. But a growing number of practitioners are acquiring positive experiences of these approaches and environments, and well-known software gurus and fast-growing service companies are promoting them with vigour.
I have to say, with this momentum and backing, you should expect these methods to proliferate over the next few years.
The BBC asked me to pitch an 'interesting talk' to an audience at the BBC Radio Theatre in Portland Place, London. The BBC was running a Digital Open Day aimed at recruiting people. My contribution was an introduction to the New Model for Testing and you can play the video here. Except you can't, as they seem to have withdrawn that and many other videos, for some reason.
It was quite a thrill to be on the stage of the Radio Theatre. When I was a kid, I used to avidly listen to the BBC Radio 1 'In Concert' live shows on a Saturday night. I just discovered that the BBC have catalogued these shows (and many others) here: http://www.bbcradioint.com/ContentFiles/In_Concert_Catalogue.pdf As you can see, there's a stellar list of performers.
This tutorial suggests that rather than being a document, test strategy is a thought process. The outcome of the thinking might be a short or a long document, but most importantly, the strategy must address the needs of the participants inside the project as well as the customers of the product to be built. It needs to be appropriate to a short agile project or to a 1000 man-year development. It has to have the buy-in of stakeholders but, above all, it must have value and be communicated.
This tutorial presents a practical definition of a test strategy, provides a simple template for creating one and describes a systematic approach to thinking it through in the right way. This will be an interactive session. Bring your test strategy problems with you – we'll try and address them during the day. You will receive a free copy of the Tester's Pocketbook.
Dates & Venues
19 February 2013 – London
09 April 2013 – London
21 May 2013 – London
16 June 2013 – London
10 September 2013 – London
22 October 2013 – London
10 December 2013 – London