Paul Gerrard

My experiences and opinions in the Test Engineering business. I republish and rewrite old blog posts from time to time.

First published 10/12/2010

I am proud and honoured to have received the Eurostar European Testing Excellence award for 2010. I’m particularly grateful to Geoff Thompson who proposed me, Graham Thomas who encouraged Geoff to put the effort in and my business partner Susan Windsor for putting up with me. Of course, I would like to thank the friends, colleagues and customers who provided references for the submission. Needless to say, I also owe a huge debt to my wife Julia and family.

To be singled out for the award is very special but I want to emphasise that I am part of a large community of testers. It is an honour to be associated with such a group of people in the UK, Europe and worldwide who are so generous with their time to challenge and to share their knowledge. In this respect, Testers seem to me to be unique in the IT industry.

Thank-you all once again.

Tags: #Eurostar #testingexcellenceaward #awards


First published 26/05/2011

Test Assurance is an evolving discipline that concerns senior testing professionals. Unfortunately, there isn't an industry-wide definition of it. Even if there were one, it would probably be far too vague.

This group aims to provide a focus for people who are active in the space to share knowledge, but also for senior folk who are looking for a potential career 'upgrade path'. By and large, test assurance pros are experts in testing, but sit above the fray. Their role is to assess, review, audit, understand and challenge testing, but not usually to conduct it. That is (as was written in one of my TA briefs)...

Test Assurance has no responsibility for delivery.

A TA engagement might be a full-time internal role in one project or a programme of projects, engaged from the beginning, with a scope of influence from requirements through to acceptance testing, performed internally or by suppliers.

A variation of this role would be to provide oversight of a project from an external point of view. In this case, Test Assurance might report to the chair of a programme management board – often a business leader.

But an alternative engagement might be as a testing trouble-shooter where a (usually large) project has a 'testing problem'. A rapid review and report, with recommendations, presented to at least project board level is the norm.

There are wide variations on these themes.

So my question in this discussion is – what is your experience/view of Test Assurance? Let's hear your comments – perhaps we can create a TA scope or terms of reference so we can define the group's focus.

Here is the link: http://www.linkedin.com/groups/Test-Assurance-3926259

Do join us.

Tags: #testassurance #linkedin


First published 06/11/2009

Ten years ago, the Internet was a relatively small, closed network used by defence and academic organisations in the US. When Mosaic, a graphical Web browser, appeared in 1994 and became widely available, the explosive popularity of the Net began, and continues today. In August 1999 the number of people connected to the net was 195m and this is expected to be 250m by the end of the millennium. In the UK around 12m people, or 20% of the population of all ages, will have access when the new Millennium dawns. If you have a PC and a modem, the cost of connection is the cost of a local telephone call.

Because the on-line market is world-wide, unrestricted, vast (and still growing), the emergence of electronic commerce as a new way of conducting business gathers momentum. E-commerce has been described as the last gold-rush of the millennium. Since the cost of entry into the e-commerce marketplace is so low, and the potential rewards so high, business-to-business and business-to-consumer vendors are scrambling to capture maximum market share.

Although the cost of entry into the market is low, the risk of failure in the marketplace is potentially very high. The web sites of traditional vendors with strong brand names have not automatically succeeded, and there have been some notable failures. Many of the largest e-commerce sites were completely unknown start-up companies three years ago. E-commerce systems have massive potential, but with new technology come new risks, and testing must change to meet the needs of the new paradigm. What are the risks of e-commerce systems?

The typical e-commerce system is a three-tiered client/server environment: database (often legacy system) servers working with application or business servers, fronted by web servers. Given this basic structure, many other special-purpose servers may also be involved: firewalls, distributed object, transaction, authentication, credit card verification and payment servers are often part of the architecture. The Web is the vehicle by which the promise of client/server will finally be realised.
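
To make the three tiers concrete in modern terms (my illustration, not from the original article), here is a minimal sketch in Python using only the standard library; the product table, URLs and port are invented for the example.

```python
# A minimal sketch of the three-tier shape described above.
# The product catalogue and the URL scheme are invented for illustration.
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

# Tier 3: the database server (an in-memory stand-in for a legacy system).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE products (name TEXT, price REAL)")
db.execute("INSERT INTO products VALUES ('widget', 9.99)")

# Tier 2: the application/business logic.
def price_of(product):
    row = db.execute(
        "SELECT price FROM products WHERE name = ?", (product,)
    ).fetchone()
    return row[0] if row else None

# Tier 1: the web server fronting the other tiers.
class ShopHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        price = price_of(self.path.strip("/"))
        found = price is not None
        self.send_response(200 if found else 404)
        self.end_headers()
        self.wfile.write(str(price).encode() if found else b"not found")

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), ShopHandler).serve_forever()
```

A real deployment would add the firewalls, payment and authentication servers the article lists, but the calling pattern between the tiers stays the same.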

Many of the risks faced by e-commerce developers are the same as for client/server, but there are important differences. Firstly, the pace of development on the web is incredibly quick. 'Web-time' describes the hustle that is required to create and maintain momentum. Few systems are documented adequately. The time from a new idea to deployment onto the Web may only be a few weeks. Enhancements may be thought of in the morning and deployed in the afternoon. Some sites, for example a TV news channel site, must provide constantly changing, but up-to-date content 24 hours a day, 365 days a year.

You have no control over the users who visit and use your site. Your users may access your site with any one of 35 different browsers or other web devices (will your site work with them all?). There is no limit to how many people can access your site at once (will your site crash under the load?). Users will not be trained, many may not speak your language, some may be disabled, some blind (will they find your site usable, fast, useful?). Some of your users will be crooks (can your site withstand a hacker's attack?). Some of your users may be under-age (are you selling alcohol to minors?). Whether you are in retail or not, one way of looking at the way people use your e-commerce site is to compare it with a traditional retail store.

Anyone can visit your store, but if your doors are shut (the site is down); if the queue to pay is too long; if the customer cannot pay the way they want to; if your price list is incomplete, out of date, or impossible to use, your customers will go elsewhere. E-commerce site designers must design to provide their users with the most relaxed, efficient and effective web experience possible. E-commerce site testers must get inside the heads of users and create test scenarios that match reality.

What are the imperatives for e-commerce testers? To adopt a rapid-response attitude. To work closely with marketeers, designers, programmers and, of course, real users to understand both user needs and the technical risks to be addressed in testing. To have a flexible test process with perhaps 20 different test types that cover each of the most likely problems. To automate as much testing as possible.

Whether home-grown or proprietary, the essential tools are for test data and transaction design; test execution using the programmer and user interfaces; and incident management and control to ensure the right problems get fixed in the right order. Additional tools to validate HTML and links, measure download time and generate loads are all necessary. To keep pace with development, wholly manual testing is no longer an option. The range of tools required is large, but most are now available.
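
By way of illustration, the link validation and download-time measurement mentioned above boil down to something like the following minimal sketch (in modern Python, my choice; the start URL is a placeholder and real tools of the period and since do far more).

```python
# A minimal sketch of a home-grown link checker that also measures
# download times. The start URL is illustrative only.
import time
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect href values from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def check_page(url):
    """Fetch a page, report status and download time, return its links."""
    start = time.time()
    with urllib.request.urlopen(url) as response:
        html = response.read().decode(errors="replace")
    elapsed = time.time() - start
    print(f"{url}: {response.status} in {elapsed:.2f}s")
    parser = LinkExtractor()
    parser.feed(html)
    return [urljoin(url, link) for link in parser.links]

if __name__ == "__main__":
    for link in check_page("http://example.com/"):
        try:
            check_page(link)   # flag broken links and slow downloads
        except Exception as exc:
            print(f"{link}: BROKEN ({exc})")
```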

Paul Gerrard, 12 September 1999.

Tags: #e-commerce #Risks


First published 09/02/2012

Atlassian kindly asked me to write a blog post on testing for their website. I wrote a longer, two-part article that appears here and here. I have combined the two articles into a single blog post here.

Testing is Dead?

Recently, there has been a spate of predictions of doom and gloom in our business. Conference talks have had titles such as ‘Test is Dead’ and ‘Death to the Testing Phase’. ‘Testing has contributed little to quality improvement in the last ten years’ and even the notion that being a tester is a ‘bad thing’ were all keynote themes that circulated at conferences, on blogs and on YouTube in late 2011.

My own company has been predicting the demise of the ‘plain old functional tester’ for years and we’ve predicted both good and bad outcomes of the technology and economic change that is going on right now. In July I posted a blog, ‘Testing is in a Mess’, where I suggested that there's complacency, self-delusion and over-capacity in the testing business, and that there is too little agreement about what testing is, what it’s for or how it should be done.

There are some significant forces at play in the IT industry and I think the testing community, at least the testers in what might be called the more developed ‘testing nations’ will be coming under extreme pressure.

The Forces and Factors that will Squeeze Testers

The growth of testing as a career

Over the last twenty years or so there has been a dramatic growth in the number of people who test and call themselves testers and test managers. When I started in the testing business in 1992 in the UK, few people called themselves a tester, let alone thought of themselves as having a career in testing. Now, there are tens of thousands in the UK alone. Twenty years ago, there were perhaps five companies offering testing services. There must be ten times that number now and there are hundreds, perhaps thousands of freelancers who specialise and make a living in testing. Beyond this of course, the large system integration and outsourcing firms have significant testing practices with hundreds or even thousands of staff, many offshored of course.

It’s not that more testing happens. Rather, the people who do it are now recruited into teams, with managers who plan, resource and control sizeable budgets in software projects to perform project test stages. Many ‘career testers’ have never done anything else.

Lack of advance in the discipline

The sources and sheer volume of testing knowledge have exploded. There are countless papers, articles, blogs and books available now, and there are many conferences, forums, meet-ups and training courses available too. But, even though the volume of information is huge, most of it is not new. As a frequent conference goer over 20 years, it depresses me that the innovation one sees in conferences, for example, tends to be focused on the testers’ struggle to keep pace with and test new technologies rather than insights and inventions that move the tester’s discipline forward.

Nowadays, much more attention is paid to the management of testing, testers, stakeholders’ expectations and decision making. But consider the argument that test management is a non-discipline – that is, there is no such thing as test management, there’s just management. Take the management away from test management and what’s left? Mostly challenges in test logistics – or just logistics – and that’s just another management discipline.

Advances(?) in Automation

What about the fantastic advances in automation? Let’s look at the two biggest types of test automation.

Test execution robots are still, well, just robots. Advances in these have tracked the increasing complexity of the products used to build and deliver functionality. From green-screen to client/server to GUI to Web to SOA, the test automation engineer of 1970 (once they got over the shock of reincarnation) would quickly recognise the patterns of test automation used today. Of course, automation frameworks are helping to make test automation somewhat more productive, but one could argue that people have been building their own custom frameworks for years and years and they should have been mainstream long ago.
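
Since the paragraph above leans on what a ‘framework’ actually is, here is a minimal keyword-driven sketch in Python (the language, the account ‘application’ and the keywords are all mine, invented purely for illustration): tabulated steps drive a small library of actions.

```python
# A minimal keyword-driven sketch: the test 'script' is a table of
# steps that drives a small library of actions. The dict-backed
# account application and the keywords are invented for illustration.
accounts = {}

def create(name):           # keyword: create an account
    accounts[name] = 0

def deposit(name, amount):  # keyword: pay money in
    accounts[name] += int(amount)

def check(name, expected):  # keyword: verify the balance
    assert accounts[name] == int(expected), \
        f"{name}: {accounts[name]} != {expected}"

KEYWORDS = {"create": create, "deposit": deposit, "check": check}

# The script is data, not code - the pattern the frameworks formalise.
script = [
    ("create", "alice"),
    ("deposit", "alice", "50"),
    ("deposit", "alice", "25"),
    ("check", "alice", "75"),
]

for step in script:
    KEYWORDS[step[0]](*step[1:])
print("all steps passed")
```

The point is not the few lines of Python; it is that the test procedure has become data, which is essentially what the commercial frameworks package up.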

The test management tools that are out there are fantastic: integrated test case management, scheduling, logging, incident management and reporting. Except that the fundamental purpose of these tools is basic record-keeping and collaboration. Big deal. The number of companies who continue to use Excel as their prime test management tool shows just how limited the test management tools are in what they do. Most organisations get away without test management products altogether because these products support the clerical test activities and logistics, but do little or nothing to support the intellectual effort of testers.

The Emergence/Dominance of Certification

The test certification schemes have gone global, it seems. Dorothy Graham and I had an idea for a ‘Foundation’ certification in 1997 and we presented a one-page syllabus proposal to an ad-hoc meeting at the STARWEST conference in San Jose to gauge interest. There wasn’t much. So we came back to the UK, engaged with ISEB (not part of BCS in those days) and I became the founding Chair of the initial ISEB Testing Board. About ten or so UK folk kicked off the development of the Foundation scheme, which had its first outing in late 1998.

As Dorothy says on her blog (http://dorothygraham.blogspot.com/2011/02/part-2-bit-of-history-about-istqb.html), the Foundation met its main objective of “removing the bottom layer of ignorance” about software testing. Fourteen years and 150,000 certificate awards later, it does the same. Except that for many people it’s all they need (and may ever need) to get a job in the industry.

The Agile Juggernaut

Agile is here to stay. Increasingly, developers seem to take testing, Test-Driven and Behaviour-Driven Development, and Specification by Example more seriously. Continuous Integration and Delivery is the heartbeat, the test, life-support and early-warning system. The demands for better testing in development are being met. A growing number of developers have known no other way.

It seems likely that if this trend continues, we’ll get better, stable software sooner and much of the checking done late by system testers will not be required. Will this reduce the need for system testers? You bet.

Some Agile projects don’t use testers – the testers perform a ‘test assurance’ role instead. The demand for unskilled testers reduces and the need for a smaller number of smarter testers with an involvement spread over multiple projects increases. Again – fewer testers are required.

What is the Squeeze?

The forces above are squeezing testers from the ‘low-value’, unskilled, downstream role towards becoming upstream, business-savvy, workflow-oriented, UX (user experience)-aware testing specialists with new tools. Developers are absorbing a lot of the checking, which is automated. Some business analysts are taking their chance, absorbing test disciplines into analysis and taking over the acceptance process too.

If a three-day certification is all you need to be a professional tester, no wonder employers think testing is a commodity and will outsource it when they can.

Stakeholders know that avoiding defects is better than finding them. Old-style testing is effective but happens at the end. Stakeholders will say, “Let’s take requirements more seriously; force developers to test and outsource the paperwork”.

Smart testers need to understand they are in the information business, that testing is being re-distributed in projects and if they are not alert, agile even, they will be squeezed out. Needless to say, the under-skilled testers, relying on clerical skills to get by will be squeezed out.

A Methodological Shift

There seems to be a methodological shift from staged, structured projects to iterative and Agile and now towards ‘continuous delivery’. Just as companies seem to be coming to terms with Agile, it’s all going to change again. They are now being invited to consider continuous ‘Specification by Example’ approaches. Specification by Example promotes a continual process of specification, exampling, test-first and continuous integration.

But where does the tester fit in this environment?

The Industry Changes its Mind – Again

So far, I’ve suggested there were four forces pushing testers out of the door of software projects (and into the real world, perhaps). Now, I want to highlight the industry changes that seem to be on the way and that impact development and delivery, and hence testing and testers. After the negative push, here’s the pull. These changes offer new opportunities and improve testers’ prospects.

Recent reports (IBM’s ‘The Essential CIO’ 2011 study and Forrester’s ‘Top 10 Technology Trends to Watch’) put Business Intelligence, adoption of cloud platforms and mobile computing as the top three areas for change and increased business value (whatever that means).

Once more, the industry is in upheaval and is set for a period of dramatic change. I will focus on adoption of the cloud for platforms in general and for Software as a Service (SaaS) in particular and the stampede towards mobile computing. I’m going to talk about internet- (not just web-) based systems rather than high integrity or embedded systems, of course.

The obvious reason for moving to the cloud is cost. For Infrastructure as a Service (IaaS), and regardless of the subtleties of capex v opex, the cost advantage of moving to cloud-based platforms is clear. “Some of this advantage is due to purchasing power through volume, some through more efficient management practices, and, dare one say it, because these businesses are managed as profitable enterprises with a strong attention to cost” (http://www.cio.com/article/484429/Capex_vs._Opex_Most_People_Miss_the_Point_About_Cloud_Economics). So, it looks like it’s going to happen.

Moving towards IaaS will save some money. The IT Director can glory in the permanent cost savings for a year – and then what? Business will want to take advantage of the flexibility that the move to the cloud offers.

The drift from desktop to laptop to mobile devices gathers pace. Mobile devices coupled with cloud-based services have been called the ‘App Internet’. It seems that many websites will cease to be and might be replaced by dedicated low-cost or free Apps that provide simple user interfaces. New businesses with new business models focusing on mobile are springing up all the time. These businesses are agile by nature and Agile by method. The pull of the App internet and Agile approaches is irresistible.

The Move to SaaS and Mobile (App) Internet

I’m not the biggest fan of blue sky forecasters, and I’m never entirely sure how they build their forecasts with an accuracy of more than one significant digit, but according to Forrester’s report “Sizing the Cloud”, the market for SaaS will grow from $21bn in 2011 to $93bn in 2016 and represent 26% of all packaged software. (http://forrester.com/rb/Research/sizing_cloud/q/id/58161/t/2).

Now, 26% of all packaged software doesn’t sound so dramatic, but wait a minute. Re-architecting an installed base of software and creating new applications from scratch to reach that percentage will be a monumental effort. A lot of this software will be used by corporates whose systems span the (probably private) cloud and legacy systems, and the challenges of integration, security, performance and reliability will be daunting.

The Impact on Development, Delivery and Testing

Much of the software development activity in the next five years or so will be driven by the need for system users and service vendors to move to new business models based on new architectures. One reason SaaS is attractive to software vendors is that the marketing channel and the service channel are virtually the same, and the route to market is so simple that tiny boutique software shops can compete on the same playing field as the huge ISVs. The ISVs need to move pretty darned quick or be left with expensive, inflexible, unmarketable on-premise products, so they are scrambling to make their products cloud-ready. Expect there to be some consolidation in some market sectors.

SaaS works as an enabler for very rapid deployment of new functionality and deployment onto a range of devices. A bright idea in marketing in the morning being deployed as new functionality in the afternoon seems feasible, and some companies seem to be succeeding with ‘continuous delivery’. This is the promise of SaaS.

Many small companies (and switched-on business units in large companies) have worked with continuous delivery for years, however. The emergence of the cloud and SaaS, and of maturing Agile, Specification by Example, continuous integration, automated testing and continuous delivery methods, means that many more companies can take this approach.

What Does this Mean for Software Practitioners?

Businesses like Amazon and Google have operated a continuous delivery model for years. The ‘Testing is Dead’ meme can be traced to an Alberto Savoia talk at Google’s GTAC conference (http://www.youtube.com/watch?v=X1jWe5rOu3g). Developers who test (with tools) ship code to thousands of internal users who ‘test’, and then the software goes live (often as a Beta). Some products take off; some, like Wave, don’t. The focus of Alberto’s talk is that software development and testing is often about testing ideas in the market.

Google may have a unique approach, I don’t know. But most organisations will have to come to terms with the new architectures and a more streamlined approach to development. The push and pull of these forces are forcing a rethink of how software available through the internet is created, delivered and managed. The impacts on testing are significant. Perhaps testing and the role of testers can at last mature to what they should be?

Some Predictions

Well, after the whirlwind tour of what’s hot and what’s not in the testing game, what exactly is going to happen? People like predictions, so I’ve consulted my magic bones and here are my ten predictions for the next five years. As predictions go, some are quite dramatic. But in some companies, in some contexts, these predictions will come true. I’m just offering some food for thought.

Our vision, captured in our Pocketbook (http://businessstorymethod.com) is that requirements will be captured as epic stories, and implementable stories will example and test those requirements to become ‘trusted’, with a behaviour-driven development approach and an emphasis on fully and always automated checking. It seems to us that this approach could span (and satisfy) the purist Agilists but allow many more companies used to structured approaches to adopt Agile methods whilst satisfying their need to have up-front requirements. Here are my predictions:

  1. 50% of in-house testers will be reassigned, possibly let go. The industry is overstaffed with unqualified testers using unsystematic, manual methods. Laying them off and/or replacing them with cheaper resources is an easy call for a CIO to make.
  2. Business test planning will become part of up-front analysis. It seems obvious, but why, for all these years, has one team captured requirements and another team planned the tests to demonstrate they are met? Make one (possibly hybrid) group responsible.
  3. Specification by Example will become the new buzzword on people’s CVs. For no other reason than that SBE incorporates so many buzzwordy Agile practices (Test-First, Test-Driven, Behaviour-Driven, Acceptance-Test-Driven, Story-Testing, Agile Acceptance Testing), it will be attractive to employers and practitioners. With care, it might actually work too.
  4. Developers will adopt behaviour-driven development and new tools. The promise of test code being automatically generated and executed, compared to writing one’s own tests, is so attractive to developers they’ll try it – and like it. Who writes the tests though?
  5. Some system tests and most acceptance tests will be business-model driven. If Business Stories – with scenarios to example the functionality, supported by models of user workflows – are created by business analysts, those stories can drive both developer tests and end-to-end system and acceptance tests. So why not?
  6. Business models plus stories will increasingly become ‘contractual’. For too long, suppliers have used the wriggle-room of sloppy requirements to excuse their poor performance and high charges for late, inevitable changes to specification. Customers will write more focused compact requirements, validated and illustrated with concrete examples to improve the target and reduce the room for error. Contract plus requirements plus stories and examples will provide the ‘trusted specification’.
  7. System tests will be generated from stories or be outsourced. Business story scenarios provide the basic blocks for system test cases. Test detailing to create automated or manual test procedures is a mechanical activity that can be outsourced.
  8. Manual scripted system test execution will be outsourced (in the cloud). The cloud is here. Testers are everywhere. At some point, customers will lose their inhibitions and take advantage of the cloud+crowd. So, plain old scripted functional testers are under threat. What about those folk who focus more on exploratory testing? Well, I think they are under threat too. If most exploration is done in the cloud, then why not give some testing to the crowd too?
  9. 50% of acceptance tests will be automated in a CI environment for all time. Acceptance moves from a cumbersome, large manual test at the end to a front-end requirements validation exercise with stories plus automated execution of those stories. Some manual tests, overseen by business analysts will always remain.
  10. Tools that manage requirements, stories, workflows, prototyping, behaviour-driven development, system and acceptance testing will emerge.
Where do testers fit? You will have to pick your way through the changes above to find your niche. Needless to say you will need more than basic ‘certification level’ skills. Expect to move towards a specialism or be reassigned and/or outsourced. Business analysis, test automation, test assurance, non-functional testing or test leadership beckon.

Whither the Test Manager?

You are a test manager or a test lead now. Where will you be in five years? In six months? It seems to me there are five broad choices for you to take (other than getting out of testing and IT altogether).
  1. Providing testing and assurance skills to business: moving up the food chain towards your stakeholders, your role could be to provide advice to business leaders wishing to take control of their IT projects. As an independent agent, you understand business concerns and communicate them to projects. You advise and cajole project leadership, review their performance and achievement and interpret outputs and advise your stakeholders.
  2. Managing Requirements knowledge: In this role, you take control of the knowledge required to define and build systems. Your critical skills demand clarity and precision in requirements and the examples that illustrate features in use. You help business and developers to decide when requirements can be trusted to the degree that software can reasonably be built and tested. You manage the requirements and glossary and dictionary of usage of business concepts and data items. You provide a business impact analysis service.
  3. Testmaster – providing an assurance function to teams, projects and stakeholders: a similar role to 1 above, but for more Agile-oriented environments. You are a specialist test and assurance practitioner who keeps Agile projects honest. You work closely with on-site customers and product owners. You help projects to recognise and react to risk, coach and mentor the team and manage their testing activities, and maybe do some testing yourself.
  4. Managing the information flow to/from the CI process: in a Specification by Example environment, if requirements are validated with business stories and these stories are used directly to generate automated tests which are run in a CI environment, the information flows between analysts, developers, testers and the CI system are critical. You define and oversee the processes used to manage the information flow between these key groups and the CI system that provides the control mechanism for change, testing and delivery.
  5. Managing outsourced/offshore teams: In this case, you relinquish your onsite test team and manage the transfer of work to an outsourced or offshore supplier. You are expert in the management of information flow to/from your software and testing suppliers. You manage the relationship with the outsourced test team, monitor their performance and assure the outputs and analyses from them.

Close

The recent history and the current state of the testing business, the pressures that drive the testers out of testing and the pull of testing into development and analysis will force a dramatic re-distribution of test activity in some or perhaps most organisations.

But don’t forget, these pressures on testing and predictions are generalisations based on personal experiences and views. Consider these ideas and think about them – your job might depend on it. Use them at your own risk.

Tags: #testingisdead #redistributionoftesting


First published 11/10/2011

Anne-Marie Charrett wrote a blog post that I commented on extensively. I've reproduced the comment here:

“Some things to agree with here, and plenty to disagree with too...

  1. Regression testing isn't about finding bugs in the same way as one might test new software to detect bugs (testing actually does not detect bugs, it exposes failure. Whatever.) It is about detecting unwanted changes in functionality caused by a change to software or its environment. Good regression tests are not necessarily 'good functional tests'. They are tests that will flag up changes in behaviour – some changes will be acceptable, some won't. A set of tests that purely achieves 80% branch coverage will probably be adequate to demonstrate functional equivalence of two versions of software with a high level of confidence – economically. They might be lousy functional tests “to detect bugs”. But that's OK – 'bug detection' is a different objective.

  2. Regression testing is one of four anti-regression approaches. Impact analysis from a technical point of view and from a business point of view are the two preventative approaches. Static code analysis is a rarely used regression-detection approach. Fourthly, and finally, regression testing is what most organisations attempt to do. It seems to be the 'easiest option' and 'least disruptive to the developers' (except that it isn't easy, and regression bugs are an embarrassing pain for developers). The point is one can't consider regression testing in isolation. It is one of four weapons in our armoury (although the technical approaches require tools). It is also over-relied on and done badly (see 1 above and 3 below).

  3. If regression testing is about demonstrating functional equivalence (or not), then who should do it? The answer is clear. Developers introduce the changes. They understand, or should understand, the potential impact of planned changes on the code base before they proceed. Call it checking if you must. Tools can do it very effectively and efficiently if the tests are well directed (80% branch coverage is a rule of thumb). Demonstrating functional equivalence is a purely technical activity that should be done by technicians.
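
To make ‘demonstrating functional equivalence’ concrete, here is a minimal sketch in Python (my illustration; both pricing functions are invented): the same inputs are replayed through the old and new versions of a routine and any behavioural difference is flagged.

```python
# A minimal functional-equivalence check: replay one set of inputs
# through the old and new versions and flag any behavioural change.
# Both pricing functions are invented for illustration.
def price_v1(quantity):
    return quantity * 10

def price_v2(quantity):                  # the 'changed' build
    total = quantity * 10
    return total * 0.9 if quantity >= 100 else total   # new bulk discount

# Inputs chosen to cover the branches of the changed code - the 80%
# branch coverage rule of thumb mentioned above.
for quantity in (0, 1, 99, 100, 500):
    old, new = price_v1(quantity), price_v2(quantity)
    status = "same" if old == new else "CHANGED"
    print(f"quantity={quantity}: v1={old} v2={new} [{status}]")
```

Note that the harness only flags differences; deciding whether the bulk-discount change is intended or a regression remains a human judgement – exactly the distinction point 1 makes.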

Of course, what happens mostly is that developers are unable to perform accurate technical impact analyses, and they don't unit test well, so they have no tests and certainly nothing automated. They may not be interested in and/or paid to do testing. So the poor old system or acceptance testers, working purely from the user interface, are obliged to give it their best shot. Of course, they try to re-use their documented tests or their exploratory nous to create good ones. And fail badly. Not only are tests driven from the UI point of view unlikely to cover the software that might be affected, but the testers are generally uninformed of the potential impact of software changes, so have no steer to choose good tests in the first place. By and large, they aren't technical and aren't privy to the musings of the developers before the code changes are made, so they are pretty much in the dark.

So UI-driven manual or automated regression testing is usually of low value (but high expense) when intended to demonstrate functional equivalence. That is not to say that UI-driven testing has no value. Far from it. It is central to assessing the business impact of changes. Unwanted side-effects may not be bugs in code; they are a natural outcome of the software changes requested by users. A common example is a change in configuration in an ERP system: the users may not get what they wanted from the 'simple change'. Ill-judged configuration changes in ERP systems designed to perform straight-through processing can have catastrophic effects. I know of one example that caused 75 man-years of manual data clean-up effort. The software worked perfectly – there was no bug. The business using the software did not understand the impact of configuration changes.

Last year I wrote four short papers on anti-regression approaches (including regression testing) in which I expand on the points above. You can see them here: http://gerrardconsulting.com/index.php?q=node/479”

Tags: #regressiontesting #anti-regression


First published 04/11/2009

Hi,

With regard to the ATM accreditation – see attached. The cost of getting accredited in the UK is quite low – UKP 300, I believe. ISTQB UK will reuse the accreditation above.

Fran O'Hara is presenting the course this week. Next week I hope to get feedback from him and I'll update the materials to address the mandatory points in the review and add changes as suggested by Fran.

I've had no word from ISTQB on availability of sample papers as yet. I'll ask again.

I have taken the ATA exam and I thought that around one third of the questions were suspicious. That is, I thought the question did not have an answer, or the provided answers were ambiguous or wrong. Interestingly, there are no comments from the client on the exam, are there?

If their objective is to pass the exam only, then their objective is not the same as the ISTQB scheme's. The training course has been reviewed against the ATA Syllabus, which explicitly states a set of learning objectives (in fact they are really training objectives, but that's another debate). The exam is currently a poor exam and does not examine the syllabus content well. It certainly is not focused on the same 'objectives' as the syllabus and training material. If the candidates joined the course thinking the only objective was to pass the exam, then they will not pay attention to the content that is the basis of the exam. I would argue that the best way to pass the exam is to attend to the syllabus. The ‘exam technique’ is very simple – and the same as for the Foundation exam. A shortage of test questions should not impair their ability to pass the exam. The exam is based on the SYLLABUS. The course is based on the SYLLABUS.

Here are my comments on their points – in RED.

  • The sessions were not oriented to pass the exam. They were general testing lessons… the main objective of the training should be to prepare the assistants for the examination. That is not the intention of the ISTQB scheme. If we offered a course that focused only on passing the exam we would certainly lose our accreditation. Agree that a sample paper is required (ISTQB to provide). It is extremely hard to prepare course material for the exam without having a sample paper. Although I have taken the exam (and found serious fault with it) I have not got a copy and was not allowed to give feedback. Most of the dedicated time in the training was not usable to pass the exam: the training was more oriented to test management than test analyst, which was the objective. I don’t know if this is true of the material, or of the way you presented it. Since the course is meant to be advanced and not basic, the material will be more focused on the tester making choices rather than doing basic exercises. The syllabus dedicates three whole days to test techniques – not management-specific material. For example: a lot of time was dedicated to risk management theory and practice, and the specific weight in the exam for that section was not so high. True. The section on risk-based testing is too long and needs cutting down.
  • More exercises needed: the training included some exercises but they were similar to the foundation-level ones. The training provider must be responsible for finding and including advanced exercises. The exercises are similar to the Foundation course exercises because the Foundation syllabus is reused. The difficulty of the ATA exercises is slightly higher. However, because the exam presents multiple-choice answers, the best technique for obtaining the correct answer may not be how one tests. This is a failure of the exam, not the training material. (Until we get a sample paper, how can we give examples of exam questions?) Examples of exercises:
    1. For a specific situation: how many test conditions… using this test technique? Not sure I understand. Is the comment, “can we have exercises that ask, how many conditions would be derived using a certain technique?” Easily done – just count the conditions in the answer.
    2. From our experience the exercises included in the exam were similar to the basic ones but more complex. Are they saying the ATA exam was like the Foundation exam – but more difficult? That is to be expected. Perhaps we provide some exercises from Foundation materials but make them a little more involved. There are a small number, but I agree we need to provide a lot more.
  • The training would include more reference to the foundation level. Er – not sure what this means. Could or should? Are they asking for more content lifted from the Foundation scheme to be included in the course? In fact, much of the reusable material is already in the course (it’s much easier to reuse than to write new!). Not sure what they are asking here.
  • Sample exams needed. Agreed!
  • A lot of time was dedicated in the sessions to theory that can be self-studied by assistants, i.e. quality attributes. This is possible. Perhaps we could extract content from the syllabus and publish it as a pre-read for the course? There are some Q&As in the handouts already, but more could be added. However, a LOT of the syllabus could be treated this way.
  • More practical work needed for the following modules:
    1. Defect management: isn’t this covered in the Advanced Test Management syllabus? (They want LESS management, don’t they?)
    2. Reviews: in the training we covered theory (types, roles…) but not practical questions like the exam’s. We don’t know what the review questions in the exam look like. They are unlikely to be ‘practical’.

The general conclusion is that the training should be pass-exam oriented. See my comment above. If this is REALLY what they want, they do not need a training course – they should just memorise the syllabus, since that is what the exam is based on. Some of the comments above, I think, are legitimate and we need to add/remove/change content in the course. Some of the ATM material could be reused as it is possibly more compact (risk, incidents, reviews). Yes, we need more sample questions – agreed! But I think some of the comments above betray a false objective. If we taught an exam-oriented course they would pass the exam but not learn much about testing. This is definitely NOT what the ISTQB scheme is about. However, people like Rex Black are cashing in on this. See here: https://store.rbcs-us.com/index.php?option=com_ixxocart&Itemid=6&p=product&id=16&parent=6 What will you suggest to the client re: getting their people through the exams? I hope some of the text above will help. If you do have specific points (other than the above) let me know. I will spend time in the next 2-3 weeks updating the materials.

Tags: #ALF


First published 18/09/2010

This is the first in a series of short essays in which I will set out an approach to test design, preparation and execution that involves testers earlier, increases their influence in projects, improves baseline documents and stability, reduces rework and increases the quality of system and acceptance testing. The approach needs automated support and the architecture for the next generation of test management tools will be proposed. I hope that doesn’t sound too good to be true and that you’ll bear with me.

Some scene-setting needs to be done...

In this series, I’m focusing on contexts (in system or acceptance testing) where scripted tests are a required deliverable and will provide the instructions in the form of scripts, procedures (or program code) to execute tests. In this opening essay, I’d like to explore why the usual approach to building test scripts (promoted in most textbooks and certification schemes) wastes time, undermines their effectiveness and limits the influence of testers in projects. These problems are well-known.

There are two common approaches to building scripted tests:

  1. Create (manually or automated) test scripts directly from a baseline (requirement or other specification documents). The scripts provide all the information required to execute a test in isolation.
  2. Create tabulated test cases (combinations of preconditions, data inputs, outputs, expected results) from the baseline and an associated procedure to be used to execute each test case in turn.
By and large, the first approach is very wasteful and inflexible and the tests themselves might not be viable anyway. The second approach is much better and is used to create so called ‘data-driven’ manual (and automated) test regimes. (Separating procedure from data in software and tests is generally a good thing!) But both of these approaches make two critical assumptions:
  • The baseline document(s) provide all the information required to extract a set of executable instructions for the conduct of a test.
  • The baseline is stable: changing requirements and designs make for a very painful test development and maintenance experience; most test script development takes place late in the development cycle.
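
Before examining those assumptions, it is worth pinning down what the second, ‘data-driven’ approach boils down to. Here is a minimal sketch in Python (my illustration – the login function and the test cases are invented): one procedure executes a table of test cases, keeping the test data separate from the procedure.

```python
# A minimal data-driven sketch: one procedure, a table of test cases.
# The function under test and the cases are invented for illustration.
def login(username, password):
    return username == "admin" and password == "s3cret"

# Each row: inputs plus the expected result.
cases = [
    ("admin", "s3cret", True),
    ("admin", "wrong",  False),
    ("",      "s3cret", False),
]

for username, password, expected in cases:
    actual = login(username, password)
    print(f"login({username!r}, {password!r}) -> {actual} "
          f"[{'PASS' if actual == expected else 'FAIL'}]")
```
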
In theory, a long term, document-intensive project with formal reviews, stages and sign-offs could deliver stable, accurate baselines providing all the information that system-level testers require. But few such projects deliver what their stakeholders want because stakeholder needs change over time and bureaucratic projects and processes cannot respond to change fast enough (or at all). So, in practice, neither assumption is safe. The full information required to construct an executable test script is not usually available until the system is actually delivered and testers can see how things really work. The baseline is rarely stable anyway: stakeholders learn more about the problem to be solved and the solution design evolves over time so ‘stability’, if ever achieved, is very late in arriving. The usual response is to bring the testers onto the project team at a very late stage.

What are the consequences?

  • The baselines are a ‘done deal’. Requirements are fixed and cannot be changed. They are not testable because no one has tried to use them to create tests. The most significant early deliverables of a project may not themselves have been tested.
  • Testers have little or no involvement in the requirements process. The defects that testers find in documents are ignored (“we’ve moved on – we’re not using that document anymore”).
  • There is insufficient detail in baselines to construct tests, so testers have to get the information they need from stakeholders, users and developers any which way they can. (Needless to say, there is insufficient detail to build the software at all! But developers at least get a head start on testers in this respect.) The knowledge obtained from these sources may conflict, causing even more problems for the tester.
  • The scripts fail in their stated objective: to provide sufficient information to delegate execution to an independent tester, outsourced organization or to an automated tool. These scripts need intelligence and varying degrees of system and business domain knowledge to be usable.
  • The baselines do not match the delivered system. Typically, the system design and implementation has evolved away from the fixed requirements. The requirements have not been maintained as users and developers focus on delivery. Developers rely on meetings, conversations and email messages for their knowledge.
  • When the time comes for test execution:
    1. The testers who created the scripts have to support the people running them (eliminating the supposed cost-savings of delegation or outsourcing).
    2. The testers run the tests themselves (but they don’t need the scripts, so how much effort to create these test scripts was wasted?).
    3. The scripts are inaccurate, so paper copies are marked up and corrected retrospectively to cover the backs of management.
    4. Automated tests won’t run at all without adjustment. In fixing the scripts, are some legitimate test failures eliminated and lost? No one knows.
When testers arrive on a project late they are under-informed and misinformed. They are isolated in their own projects. Their sources of knowledge are unreliable: the baseline documents are not trustworthy. Sources of knowledge may be uncooperative: “the team is too busy to talk to you – go away!”

Does this sound familiar to you?

That’s the scene set. In the next essay, I’ll set out a different vision.

Tags: #Essaysontestdesign


First published 20/09/2010

In the first essay in this series, I set out the challenges of system-level testing in environments where requirements documents define the business need and pre-scripted tests drive demonstrations that business needs are met. These challenges are not being met in most systems development projects.

In this second essay, I’d like to set out a vision for how organizations could increase confidence in requirements and the solutions they describe and to regard them as artifacts that are worth keeping and maintaining. Creating examples that supplement requirements will provide a better definition of the proposed solution for system developers and a source of knowledge for testing that aligns with the business need.

I need to provide some justification. The response of some to the challenge of capturing trusted requirements and managing change through a systems development project is to abandon the concept of pre-stated requirements entirely. The Agile approach focuses on the dynamics of development and the delivered system is ‘merely’ an outcome. This is a sensible approach in some projects. The customer is continuously informed by witnessing demonstrations or having hands-on access to the evolving system to experience its behaviour in use. By this means, they can steer the project towards an emergent solution. The customer is left with experience but no business definition of the solution – only the solution itself. That’s the deal.

But many projects that must work with (internally or externally) contracted requirements treat those requirements as a point of departure, to be left behind and to fade into corporate memory, rather than as a continuously available, dynamic vision of the destination. In effect, projects simply give up on having a vision at all and are driven by the bureaucratic needs to follow a process and the commercial imperative of delivering ‘on time’. It’s no surprise that so many projects fail.

In these projects, the customer is obliged to regard their customer test results as sufficient evidence that the system should be paid for and adopted. But the content of these tests is too often influenced by the solution itself. The content of these tests – at least at a business level – could be defined much earlier. In fact, they could be derived from the requirements and the ways the users intend to do business using the proposed system (i.e. their new or existing business processes). The essential content of examples is re-usable as tests of the business requirements and business processes from which they are derived. Demonstration by example IS testing. (One could call them logical tests, as compared with the physical tests of the delivered system).

The potential benefits of such an approach are huge. The requirements and processes to be used are tested by example. Customer confidence and trust in these requirements is increased. Tested, trusted requirements with a consistent and covering set of examples provide a far better specification to systems developers: concrete examples provide clarity, improve their understanding and increase their chances of success. Examples provide a trusted foundation for later system and acceptance testing so reusing the examples saves time. The level of late system failures can be expected to be lower. The focus of acceptance tests is more precise and stakeholders can have more confidence in their acceptance decisions. All in all, a much improved state of affairs.

Achieving this enlightened state requires an adjustment of attitudes and focus by customers and systems development teams. I am using the Test Axioms (http://testaxioms.com) to steer this vision and here are the main tenets of it:

  1. Statements of Requirements, however captured, cannot be trusted if they are fixed and unchanging.
  2. Requirements are an ambiguous, incomplete definition of business needs. They must be supported by examples of the system in use.
  3. Requirements must be tested: examples are derived from the requirements and guided by the business process; they are used to challenge and confirm the thinking behind the requirements and processes.
  4. Requirements, processes and examples together provide a consistent definition of the business need to be addressed by the system supplier.
  5. The business-oriented approach is guided by the Stakeholder and Design Axioms.
  6. Examples are tests: like all tests, they have associated models, coverage, baselines, prioritisations and oracles (a concrete sketch follows this list).
  7. Business impact analyses during initial development and subsequent enhancement projects are informed by requirements and examples. Changes in need are reflected by changes in requirements and associated examples.
  8. Tests intended to demonstrate that business needs are met are derived from the examples that tested the requirements.
  9. Requirements and examples are maintained for the lifetime of the systems they define. The term ‘Live Specs’ has been used for this discipline.
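
As a sketch of tenet 6, consider a hypothetical requirement – ‘orders of £50 or more ship free’ – captured as concrete examples (everything below is invented for illustration, in Python). The examples first challenge the requirement (is £50.00 itself free? what about an empty basket?) and can later be executed against the delivered system.

```python
# Hypothetical requirement: "orders of £50 or more ship free".
# The examples both test the requirement and, later, the system.
def shipping_charge(order_total):
    return 0.00 if order_total >= 50.00 else 4.99

examples = [
    (49.99, 4.99),   # just below the threshold - charged
    (50.00, 0.00),   # boundary case the prose requirement left ambiguous
    (0.00,  4.99),   # empty basket - did the stakeholders intend this?
]

for total, expected in examples:
    assert shipping_charge(total) == expected, f"order of {total} failed"
print("all examples hold")
```

If the stakeholders decide the empty basket should behave differently, the example – not just the requirement – changes with it, which is what tenets 7 and 9 demand.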

If this is the vision, then some interesting questions (and challenges) arise:

  • Who creates examples to test requirements? Testers or business analysts?
  • Does this approach require a bureaucratic process? Is it limited to large structured projects?
  • What do examples look like? How formal are they?
  • What automated support is required for test management?
  • How does this approach fit with automated test execution?
  • What is the model for testing requirements? How do we measure coverage?
  • How do changing requirements and examples fit with contractual arrangements?
  • What is the requirements test process?
  • How do we make the change happen?
I’ll be discussing these and other questions in subsequent essays.

Tags: #Essaysontestdesign #examples


First published 30/06/2011

Scale, extended timescales, logistics, geographic resource distribution, requirements/architectural/commercial complexity, and the demand for documented plans and evidence are the gestalt of larger systems development. “Large systems projects can be broken up into a number of more manageable smaller projects requiring less bureaucracy and paperwork” sounds good, but few have succeeded. Iterative approaches are the obvious way to go, but not many corporates have the vision, the skills or the patience to operate that way. Even so, session-based/exploratory testing is a component of almost all test approaches.

The disadvantages of documentation are plain to see. But there are three aspects that concern us.

  1. Projects, like life, never stand still. Documentation is never up to date or accurate, and it's a pain to maintain – so it usually isn't.
  2. Processes can be put in place to keep the requirements and all dependent documentation in perfect synchronisation, but the delays caused by the required human interventions and translation processes undermine our best efforts.
  3. At the heart of projects are people. They can rely on processes and paper to save them and stop thinking. Or they can use their brains.

Number 3 is the killer, of course. With the best will, processes and discipline in the world, all our sources of knowledge are fallible. It is our human ability and flexibility and, dare I say it, agility that allows us to build and test some pretty big stuff that seems to work.

Societal and corporate stupor (aka culture) conspires to make us less interested in tracking down the flaws in requirements, designs, code, builds and thinking. It is our exploratory instincts that rescue us.

Tags: #ALF


First published 09/12/2009

A post on the Software Testing Club, Is Testing Last in Line?, seems oh so familiar – complaints (if that is what they are) heard for as long as I've been in software (and I'm in my 29th year).

I think all of the responses to the blog are reasonable – but the underlying assumption in all (most) of them is that the tester is responsible for getting:

a) involved early b) involved heavily

Now of course, there are arguments that we have all had drummed into us since the 1970s that can be used to support both of these aims. But they tend to exclude the viewpoint of the stakeholder. (To me a stakeholder is anyone who is interested in the outcome of testing).

Why were we not automatically involved earlier, if it is so self-evident we should be?

Was no one interested in what we could tell them (given access to whatever products were being produced) at the time? Do stakeholders think we produce so little of interest?

Why don't we get the time, budget, people and resources to test as much as we could?

The same challenges apply. And the conclusions are uncomfortable.

By and large our stakeholders are not stupid. If it is self-evident to us that we should be involved earlier, more and more often, why isn't it obvious to them? Howling at the moon won't help us.

Surely we need to engage with stakeholders:

What exactly do they want from us? When? In what format? How frequently? How flexibly? How thoroughly? and so on.

Testing is 'last in line' with good reason. We don't engage; we don't articulate what we do well enough; we provide data, not information; we provide it late (self-fulfilling prophecy time here); we focus on bugs rather than business goals; we write documents when we need to deliver intelligence; we find bugs when we need to provide evidence of success; we refuse to judge and give our stakeholders nothing when they need our support most; etc. etc.

Every ten years or so, I get depressed when the next “big thing” arrives and it appears that, well... it's the same old same old with a different label. New technologies offer new opportunities I guess and after reading a text book or two I get on board.

But for as long as I can remember, testers have been complaining that testing is 'last in line' and they BLAME OTHERS. Surely it's time to look at how WE behave as testers? Surely, we should look at what we are doing wrong rather than blame others?

Tags: #prioritisation #lastinline
