Paul Gerrard

My experiences and opinions in the Test Engineering business. I am republishing/rewriting old blogs from time to time.

First published 06/11/2009

Getting the requirements ‘right’ for a system is a prerequisite for successful software development, but getting requirements right is also one of the most difficult things to achieve. There are many difficulties to overcome in articulating, documenting and validating requirements for computer systems. Inspections, walkthroughs and prototyping are the techniques most often used to test or refine requirements. However, in many circumstances, formal inspections are viewed as too expensive, walkthroughs as ineffective and prototyping as too haphazard and uncontrolled to be relied on.

Users may not have a clear idea of what they want, and are unable to express requirements in a rational, systematic way to analysts. Analysts may not have a good grasp of the business issues (which will strongly influence the final acceptance of the system) and tend to concentrate on issues relevant to the designers of the system instead. Users are asked to review and accept requirements documents as the basis for development and final acceptance, but they are often unable to relate the requirements to the system they actually envisage. As a consequence, it is usually a leap of faith for the users when they sign off a requirements document.

This paper presents a method for decomposing requirements into system behaviours which can be packaged for use in inspections, walkthroughs and requirements animations. Although not a formal method, it is suggested that by putting some formality into the packaging of requirements, the cost of formal inspections can be reduced, effective walkthroughs can be conducted and inexpensive animations of requirements can be developed.

Registered users can download the paper from the link below. If you aren't registered, you can register here.

Tags: #testingrequirements

Paul Gerrard Please connect and contact me using my linkedin profile. My Mastodon Account

First published 21/02/2008

Teacher: Paul, make a sentence starting with the letter I.

Paul: I is...

Teacher: No, no, no, don't say "I is", you say "I am".

Paul: OK, I am the ninth letter of the alphabet.


This blog is my response to James Bach's comments on his blog about my postings on testing axioms, "Does a set of irrefutable test axioms exist?" and "The 12 Axioms of Testing". There are a lot of comments – all interesting – but many need a separate response. So, read the following as if it were a conversation – it might make more sense.

PG: = Paul.
Text in the standard font (not highlighted) = James.



Here we go... James writes...

Paul Gerrard believes there are irrefutable testing axioms.

PG: I'm not sure I do or I don't. My previous blog asks whether there could be such axioms. This is just an interesting thought experiment. Interesting for me anyway. ;–)
This is not surprising, since all axioms are by definition irrefutable.

PG: Agreed – "irrefutable axioms" is tautological. I changed my blog title quickly – you probably got the first version; I didn't amend the other blog posting. 'Irrefutable' is the main word in that title, so I'll leave it as it is.
To call something an axiom is to say you will cover your ears and hum whenever someone calls that principle into question.

PG: It's an experiment, James. I'm listening and not humming.
An axiom is a fundamental assumption on which the rest of your reasoning will be based.

PG: Not all the time. If we encounter an 'exception' in daily life, and in our business we see exceptions all the damn time, we must challenge all such axioms. The axiom must explain the phenomena or be changed or abandoned. Over time, proposals gain credibility and evolve into axioms or are abandoned.
They are not universal axioms for our field.

PG: (Assume you mean "there are no") Now, that is the question I'm posing! I'm open to the possibility. I sense there's a good one.
Instead they are articles of Paul’s philosophy.

PG: Nope – I'm undecided. My philosophy, if I have one, is, "everything is up for grabs".
As such, I’m glad to see them. I wish more testing authors would put their cards on the table that way.

PG: Well thanks (thinks... damned with faint praise ;–) ).
I think what Paul means is not that his axioms are irrefutable, but that they are necessary and sufficient as a basis for understanding what he considers to be good testing.

PG: Hmm, I hadn't quite thought of it like that but keep going. These aren't MY axioms any more than Newton's laws belonged to him – they were 'discovered'. It took me an hour to sketch them out – I've never used them in this format but I do suspect they have been in some implicit way, my guide. I hope they have been yours too. If not...
In other words, they define his school of software testing.

PG: WHAT! Pause while I get up off the floor haha. Deep breath, Paul. This is news to me, James!
They are the result of many choices Paul has made that he could have made differently. For instance, he could have treated testing as an activity rather than speaking of tests as artifacts. He went with the artifact option, which is why one of his axioms speaks of test sequencing. I don’t think in terms of test artifacts, primarily, so I don’t speak of sequencing tests, usually. Usually, I speak of chartering test sessions and focusing test attention.

PG: I didn't use the word artifact anywhere. I regard testing as an activity that produces Project Intelligence – information, knowledge, evidence, data – whatever you like – that has some value to the tester but more to the stakeholders of testing. We should think of our stakeholders before we commit to a test approach and not be dogmatic. (The stakeholder axiom). How can you not agree with that one? The sequencing axiom suggests you put most valuable/interesting/useful tests up front as you might not have time to do every test – you might be stopped at any time in fact. Test Charters and Sessions are right in line with at least half of the axioms. I do read stuff occasionally :–) Next question please!

PG: No, these aren't the result. They are thoughts, instincts even, that I've had for many years and have tried to articulate. I'm posing a question. Do all testers share some testing instincts? I won't be convinced that my proposed axioms are anywhere close until they've been tested and perfected through experience. I took some care to consider the 'school'.
Sometimes people complain that declaring a school of testing fragments the craft. But I think the craft is already fragmented, and we should explore and understand the various philosophies that are out there. Paul’s proposed axioms seem a pretty fair representation of what I sometimes call the Chapel Hill School, since the Chapel Hill Symposium in 1972 was the organizing moment for many of those ideas, perhaps all of them. The book Program Test Methods, by Bill Hetzel, was the first book dedicated to testing. It came out of that symposium.

PG: Hmm. This worries me a lot. I am not a 'school', thank you very much. Too many schools push dogma, demand obedience to school rules and mark people for life. They put up barriers to entry and exit and require members to sing the same school song. No thanks. I'm not a school.

It reminds me of Groucho Marx. "I wouldn't want to join any club that would have me as a member."
The Chapel Hill School is usually called “traditional testing”, but it’s important to understand that this tradition was not well established before 1972. Jerry Weinberg’s writings on testing, in his authoritative 1961 textbook on programming, presented a more flexible view. I think the Chapel Hill school has not achieved its vision; it was largely in dissatisfaction with it that the Context-Driven school was created.

PG: In my questioning post, I used 'old school' and 'new school' just to label one obvious choice – pre-meditated v contemporaneous design and execution to illustrate that axioms should support or allow both – as both are appropriate in different contexts. I could have used school v no-school or structured v ad-hoc or ... well anything you like. This is a distraction.

But I am confused. You call the CH symposium a school and label that "traditional". What did the symposium of 1972 call themselves? Traditional? A school? I'm sure they didn't wake up the day after thinking "we are a school" and "we are traditional". How do those labels help the discussion? In this context, I can't figure out whether 'school' is a good thing or bad. I only know one group who call themselves a school. I think 'brand' is a better label.
One of his axioms is “5. The Coverage Axiom: You must have a mechanism to define a target for the quantity of testing, measure progress towards that goal and assess the thoroughness in a quantifiable way.” This is not an axiom for me. I rarely quantify coverage. I think quantification that is not grounded in measurement theory is no better than using numerology or star signs to run your projects. I generally use narrative and qualitative assessment, instead.

PG: Good point. The words 'quantity' and 'quantifiable' imply numeric measurement – that wasn't my intention. Do you have a form of words I should use that would encompass quantitative and qualitative assessment? I think I could suggest "You must have a means of evaluating narratively, qualitatively or quantitatively the testing you plan to do or have done". When someone asks how much testing we plan to do, have done or have left to do, I think we should be able to provide answers. "I don't know" is not a good answer – if you want to stay hired.
For you context-driven hounds out there

PG: Sir, Yes Sir! ;–)
practice your art by picking one of his axioms and showing how it is possible to have good testing, in some context, while rejecting that principle. Post your analysis as a comment to this blog, if you want.

PG: Yes please!
In any social activity (as opposed to a mathematical or physical system), any attempt to say “this is what it must be” boils down to a question of values or definitions. The Context-Driven community declared our values with our seven principles. But we don’t call our principles irrefutable. We simply say here is one school of thought, and we like it better than any other, for the moment.

PG: I don't think I'm saying "this is what it must be" at all. What is "it", what is "must be"? I'm asking testers to consider the proposal and ask whether they agree if it has some value as a guide to choosing their actions. I'm not particularly religious but I think "murder is wrong". The fact that I don't use the ten commandments from day to day does not mean that I don't see value in them as a set of guiding principles for Christians. Every religion has their own set of principles, but I don't think many would argue murder is acceptable. So even religions are able to find some common ground. In this analogy, school=religion. Why can't we find common ground between schools of thought?

I'm extremely happy to amend, remove or add to the axioms as folk comment. Either all my suggestions will be completely shot down or some might be left standing. I'm up for trying. I firmly believe that there are some things all testers could agree on no matter how abstract. Are they axioms? Are they motherhood and apple pie? Let's find out. These abstractions could have some value other than just as debating points. But let's have that debate.

By the way – my only condition in all this is that you use the blog the proposed axioms appear on. If you want to defend the proposed axioms – be my guest.

Thanks for giving this some thought – I appreciate it.


Tags: #School'sOut!

Paul Gerrard Please connect and contact me using my linkedin profile. My Mastodon Account

First published 07/12/2011

It's been interesting to me to watch, over the last 10 or maybe 15 years, the debate over whether exploratory or scripted testing is more effective. There's no doubt that one can explore more of a product in the time it takes for someone to follow a script. But then again – how much time do exploratory testers lose bumbling around, aimlessly going over the same ground many times, hitting dead ends (because they have little or no domain or product knowledge to start with)? Compare that with a tester who has lived with the product requirements as they have evolved over time. They may or may not be blinkered, but they are better informed – sort of.

I'm not going to decry the value of exploration or planned tests – both have great value. But I reckon people who think exploration is better than scripted under all circumstances have lost sight of a thing or two. And that phrase 'lost sight of a thing or two' is significant.

I'm reading Joseph T. Hallinan's book, “Why We Make Mistakes”. Very early on, in the first chapter no less, Hallinan suggests “we're built to quit”. It makes sense. So we are.

When humans are looking for something – smuggled explosives, tumours in x-rays, bugs in software – they are adept at spotting what they look for if, and it's a big if, these things are common. In that case they are pretty effective, spotting what they look for most of the time.

But what if what they seek is relatively rare? Humans are predisposed to give up the search prematurely. It's evolution, stupid! Looking for, and not finding, food in one place just isn't sensible after a while. You need to move on.

Hallinan quotes (among others) the cases of people who look for PA-10 rifles in luggage at airports and tumours in x-rays. In these cases, people look for things that rarely exist. In the case of radiologists, mammograms reveal tumours only 0.3 percent of the time; 99.7 percent of the time the searcher will not find what they look for.

In the case of guns or explosives in luggage, the occurrence is rarer still. In 2004, according to thegunsource.com, 650 million passengers travelled by air in the US, but only 598 firearms were found – about one in a million occurrences.

Occupations that seek to find things that are rare have considerable error rates. The miss rate for radiologists looking for cancers is around 30%. In one study at the world-famous Mayo Clinic, 90% of the tumours missed by radiologists were visible in previous x-rays.
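Out of curiosity, a quick back-of-envelope check that the figures quoted above hang together (a sketch in Python; the numbers are the ones reported, not mine):

```python
# Back-of-envelope check of the detection figures quoted above.

passengers = 650_000_000   # US air passengers in 2004 (as reported)
firearms_found = 598       # firearms found in luggage that year

finds_per_million = firearms_found / passengers * 1_000_000
print(f"Firearms found per million passengers: {finds_per_million:.2f}")  # ~0.92

tumour_rate = 0.003        # mammograms revealing tumours: 0.3 percent
print(f"Scans with nothing to find: {1 - tumour_rate:.1%}")  # 99.7%

miss_rate = 0.30           # radiologists' miss rate for cancers, ~30%
# Of 1000 scans, ~3 contain a tumour and ~1 of those 3 is missed.
print(f"Tumours missed per 1000 scans: {1000 * tumour_rate * miss_rate:.1f}")
```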

In 2008, I travelled from the UK to the US, to Holland and to Ireland. Returning from Ireland – my sixth flight, with the same rucksack on my back – I was called to one side by a security officer at the security check at Dublin airport. A lock-knife with a 4.5 inch blade had been found in my rucksack. Horrified, when presented with the article, I asked for it to be disposed of! It was mine, but in the bag by mistake – it had been there for six months, unnoticed by me and by the security scans at five previous airport terminals. Five scans had failed to detect a quite heavy metal object – pointed and a potentially dangerous weapon. How could that happen? Go figure.

Back to software. Anyone can find bugs in crappy software. It's like walking in bare feet in a room full of loaded mousetraps. But if you are testing software of high quality, it's harder to find bugs. It may be that you give up before you have given yourself time to find the really (or not so) subtle ones.

Would a script help? I don't know. It might, because in principle you have to follow it. But it might make you even more bored. All testers get bored/hungry/lazy/tired and are more or less incompetent or uninformed – you might give up before you've given yourself time to find anything significant. Our methods, such as they are, don't help much with this problem. Exploratory testing can be just as draining/boring as scripted.

I want people to test well. It seems to me that the need to test well increases with the criticality and quality of the software, and motivation to test aligns pretty closely. Is exploratory or scripted testing of very high quality software more effective? I'm not sure we'll ever know until someone does a proper experiment (and I don't mean testing a 2,000-line toy program in a website or a nuclear missile).

I do know that if you are testing high quality code – and just before release it usually is of high quality – then you have to have your eyes open and your brain switched on. Both of 'em.

Tags: #exploratorytesting #error #scriptedtesting

Paul Gerrard Please connect and contact me using my linkedin profile. My Mastodon Account

First published 11/10/2011

Anne-Marie Charrett wrote a blog post that I commented on extensively. I've reproduced the comment here:

“Some things to agree with here, and plenty to disagree with too...

  1. Regression testing isn't about finding bugs in the same way as one might test new software to detect bugs (testing actually does not detect bugs, it exposes failure. Whatever.) It is about detecting unwanted changes in functionality caused by a change to software or its environment. Good regression tests are not necessarily 'good functional tests'. They are tests that will flag up changes in behaviour – some changes will be acceptable, some won't. A set of tests that purely achieves 80% branch coverage will probably be adequate to demonstrate functional equivalence of two versions of software with a high level of confidence – economically. They might be lousy functional tests “to detect bugs”. But that's OK – 'bug detection' is a different objective.

  2. Regression Testing is one of four anti-regression approaches. Impact analysis from a technical and a business point of view are the two preventative approaches. Static code analysis is a rarely used regression detection approach. Fourthly, and finally, regression testing is what most organisations attempt to do. It seems to be the 'easiest option' and 'least disruptive to the developers' (except that it isn't easy, and regression bugs are an embarrassing pain for developers). The point is that one can't consider regression testing in isolation. It is one of four weapons in our armoury (although the technical approaches require tools). It is also over-relied on and done badly (see 1 above and 3 below).

  3. If regression testing is about demonstrating functional equivalence (or not), then who should do it? The answer is clear. Developers introduce the changes. They understand, or should understand, the potential impact of planned changes on the code base before they proceed. Demonstrating functional equivalence is a purely technical activity – call it checking if you must – that should be done by technicians. Tools can do it very effectively and efficiently if the tests are well directed (80% branch coverage is a rule of thumb).

Of course, what happens mostly is that developers are unable to perform accurate technical impact analyses, and they don't unit test well, so they have no tests and certainly nothing automated. They may not be interested in and/or paid to do testing. So the poor old system or acceptance testers, working purely from the user interface, are obliged to give it their best shot. Of course, they try to re-use their documented tests or their exploratory nous to create good ones. And fail badly. Not only are tests driven from the UI point of view unlikely to cover the software that might be affected, the testers are generally uninformed of the potential impact of software changes, so they have no steer to choose good tests in the first place. By and large, they aren't technical and aren't privy to the musings of the developers before the code changes are made, so they are pretty much in the dark.

So UI-driven manual or automated regression testing is usually of low value (but high expense) when intended to demonstrate functional equivalence. That is not to say that UI-driven testing has no value. Far from it. It is central to assessing the business impact of changes. Unwanted side-effects may not be bugs in code; they are a natural outcome of the software changes requested by users. A common unwanted effect is, for example, a change in configuration in an ERP system: the users may not get what they wanted from the 'simple change'. Ill-judged configuration changes in ERP systems designed to perform straight-through processing can have catastrophic effects. I know of one example that caused 75 man-years of manual data clean-up effort. The software worked perfectly – there was no bug. The business using the software did not understand the impact of configuration changes.

Last year I wrote four short papers on Anti-Regression Approaches (including regression testing) in which I expand on the points above. You can see them here: http://gerrardconsulting.com/index.php?q=node/479”
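To make the functional-equivalence check in point 3 concrete, here is a minimal sketch of the kind of comparison a developer might automate (Python; the shipping-fee rule and both 'versions' are invented stand-ins, not anyone's real code):

```python
# Minimal functional-equivalence check: replay the same inputs through
# both versions of a rule and flag any behaviour that has changed.
# Some changes will be acceptable, some won't - the check only detects them.

def old_version(weight_kg: float) -> float:
    """Stand-in for the previous build: free shipping under 2 kg."""
    return 0.0 if weight_kg < 2.0 else 5.0

def new_version(weight_kg: float) -> float:
    """Stand-in for the changed build: the boundary has moved."""
    return 0.0 if weight_kg <= 2.0 else 5.0

# Inputs chosen to exercise every branch and the boundary - the
# '80% branch coverage as a rule of thumb' idea, not 'good functional tests'.
inputs = [0.5, 1.99, 2.0, 2.01, 10.0]

for x in inputs:
    before, after = old_version(x), new_version(x)
    if before != after:
        print(f"weight={x} kg: behaviour changed {before} -> {after}")

# Whether a flagged change is a regression or an intended change is a
# judgement for the developer's impact analysis, not for the check itself.
```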

Tags: #regressiontesting #anti-regression

Paul Gerrard Please connect and contact me using my linkedin profile. My Mastodon Account

First published 06/02/2010

This was one of two presentations at the fourth Test Management Summit in London on 27 January 2010.

I don't normally preach about test automation as, quite frankly, I find the subject boring. The last time I talked about automation was maybe ten years ago. This was the most popular topic in the session popularity survey we ran a few days before the event, and the session was very well attended.

The PowerPoint slides were written on the morning of the event while other sessions were taking place. It would seem that a lot of deep-seated frustrations with regression testing and automation came to the fore. The session itself became something of a personal rant and caused quite a stir.

The slides have been amended a little to include some of the ad-hoc sentiments I expressed in the session and also to clarify some of the messages that came 'from the heart'.

I hope you find it interesting and/or useful.

Regression Testing – What to Automate and How

Tags: #testautomation #automation #regressiontesting

Paul Gerrard Please connect and contact me using my linkedin profile. My Mastodon Account

First published 09/02/2012

Atlassian kindly asked me to write a blog post on testing for their website. I wrote a longer, two-part article that appears here and here. I have combined the two articles into a single blog post here.

Testing is Dead?

Recently, there has been a spate of predictions of doom and gloom in our business. Conference talks have had titles such as ‘Test is Dead’ and ‘Death to the Testing Phase’. ‘Testing has contributed little to quality improvement in the last ten years’ and even ‘being a tester is a bad thing’ are all keynote themes that circulated at conferences, in blogs and on YouTube in late 2011.

My own company has been predicting the demise of the ‘plain old functional tester’ for years and we’ve predicted both good and bad outcomes of the technology and economic change that is going on right now. In July I posted a blog, ‘Testing is in a Mess’, where I suggested that there's complacency, self-delusion and over-capacity in the testing business; there is too little agreement about what testing is, what it’s for or how it should be done.

There are some significant forces at play in the IT industry and I think the testing community, at least the testers in what might be called the more developed ‘testing nations’ will be coming under extreme pressure.

The Forces and Factors that will Squeeze Testers

The growth of testing as a career

Over the last twenty years or so there has been a dramatic growth in the number of people who test and call themselves testers and test managers. When I started in the testing business in 1992 in the UK, few people called themselves a tester, let alone thought of themselves as having a career in testing. Now, there are tens of thousands in the UK alone. Twenty years ago, there were perhaps five companies offering testing services. There must be ten times that number now and there are hundreds, perhaps thousands of freelancers who specialise and make a living in testing. Beyond this of course, the large system integration and outsourcing firms have significant testing practices with hundreds or even thousands of staff, many offshored of course.

It’s not that more testing happens. It’s that the people who do it are now recruited into teams, with managers who plan, resource and control sizable budgets in software projects to perform project test stages. Many ‘career testers’ have never done anything else.

Lack of advance in the discipline

The sources and sheer volume of testing knowledge have exploded. There are countless papers, articles, blogs and books available now, and there are many conferences, forums, meet-ups and training courses available too. But, even though the volume of information is huge, most of it is not new. As a frequent conference goer over 20 years, it depresses me that the innovation one sees in conferences, for example, tends to be focused on the testers’ struggle to keep pace with and test new technologies rather than insights and inventions that move the tester’s discipline forward.

Nowadays, much more attention is paid to the management of testing, testers, stakeholders’ expectations and decision making. But consider the argument that test management is a non-discipline – that there is no such thing as test management, there’s just management. If you take the management away from test management, what’s left? Mostly challenges in test logistics – or just logistics – and that’s just another management discipline.

Advances(?) in Automation

What about the fantastic advances in automation? Let’s look at the two biggest types of test automation.

Test execution robots are still, well, just robots. The advances in these have traced the increased complexity of the products used to build and deliver functionality. From green-screen to client/server to GUI to Web to SOA, the test automation engineer of 1970 (once they got over the shock of reincarnation) would quickly recognise the patterns of test automation used today. Of course, automation frameworks are helping to make test automation somewhat more productive, but one could argue that people have been building their own custom frameworks for years and years and they should have been mainstream long ago.

The test management tools that are out there are fantastic: integrated test case management, scheduling, logging, incident management and reporting. Except that the fundamental purpose of these tools is basic record-keeping and collaboration. Big deal. The number of companies who continue to use Excel as their prime test management tool shows just how limited these products are in what they do. Most organisations get away without test management products altogether, because these products support the clerical test activities and logistics but do little or nothing to support the intellectual effort of testers.

The Emergence/Dominance of Certification

The test certification schemes have gone global it seems. Dorothy Graham and I had an idea for a ‘Foundation’ certification in 1997 and we presented a one page syllabus proposal to an ad-hoc meeting at the Star WEST conference in San Jose to gauge interest. There wasn’t much. So we came back to the UK, engaged with ISEB (not part of BCS in those days) and I became the founding Chair of the initial ISEB Testing Board. About ten or so UK folk kicked off the development of the Foundation scheme which had its first outing in late 1998.

As Dorothy says on her blog (http://dorothygraham.blogspot.com/2011/02/part-2-bit-of-history-about-istqb.html), the Foundation met its main objective of “removing the bottom layer of ignorance” about software testing. Fourteen years and 150,000 certificate awards later, it does the same. Except that for many people it’s all they need (and may ever need) to get a job in the industry.

The Agile Juggernaut

Agile is here to stay. Increasingly, developers seem to take testing – Test-Driven and Behaviour-Driven Development and Specification by Example – more seriously. Continuous Integration and Delivery is the heartbeat, the test, life-support and early-warning system. The demands for better testing in development are being met. A growing number of developers have known no other way.

It seems likely that if this trend continues, we’ll get better, stable software sooner and much of the checking done late by system testers will not be required. Will this reduce the need for system testers? You bet.

Some Agile projects don’t use testers – the testers perform a ‘test assurance’ role instead. The demand for unskilled testers reduces and the need for a smaller number of smarter testers with an involvement spread over multiple projects increases. Again – fewer testers are required.

What is the Squeeze?

The forces above are squeezing testers from the ‘low-value’ unskilled, downstream role to upstream, business-savvy, workflow-oriented, UX (user experience)-aware testing specialists with new tools. Developers are absorbing a lot of checking that is automated. Some business analysts are taking their chance and absorbing test disciplines into analysis and are taking over the acceptance process too.

If a 3-day certification is all you need to be a professional tester, no wonder employers think testing is a commodity and will outsource it when they can.

Stakeholders know that avoiding defects is better than finding them. Old-style testing is effective but happens at the end. Stakeholders will say, “Let’s take requirements more seriously; force developers to test and outsource the paperwork”.

Smart testers need to understand they are in the information business, that testing is being re-distributed in projects and if they are not alert, agile even, they will be squeezed out. Needless to say, the under-skilled testers, relying on clerical skills to get by will be squeezed out.

A Methodological Shift

There seems to be a methodological shift from staged, structured projects to iterative and Agile and now towards ‘continuous delivery’. Just as companies seem to be coming to terms with Agile, it’s all going to change again. They are now being invited to consider continuous ‘Specification by Example’ approaches. Specification by Example promotes a continual process of specification, exampling, test-first and continuous integration.

But where does the tester fit in this environment?

The Industry Changes its Mind – Again

So far, I’ve suggested there were four forces that were pushing testers out of the door of software projects (and into the real world, perhaps). Now, I want to highlight the industry changes that seem to be on the way, that impact on development and delivery and hence on testing and testers. After the negative push, here’s the pull. These changes offer new opportunities and improve testers’ prospects.

Recent reports (IBM’s ‘The Essential CIO’ 2011 study and Forrester’s ‘Top 10 Technology Trends to Watch’) put Business Intelligence, adoption of cloud platforms and mobile computing as the top three areas for change and increased business value (whatever that means).

Once more, the industry is in upheaval and is set for a period of dramatic change. I will focus on adoption of the cloud for platforms in general and for Software as a Service (SaaS) in particular and the stampede towards mobile computing. I’m going to talk about internet- (not just web-) based systems rather than high integrity or embedded systems, of course.

The obvious reason for moving to the cloud for Infrastructure as a Service (IaaS) is cost: regardless of the subtleties of capex v opex, the cost advantage of moving to cloud-based platforms is clear. “Some of this advantage is due to purchasing power through volume, some through more efficient management practices, and, dare one say it, because these businesses are managed as profitable enterprises with a strong attention to cost” (http://www.cio.com/article/484429/Capex_vs._Opex_Most_People_Miss_the_Point_About_Cloud_Economics). So, it looks like it’s going to happen.

Moving towards IaaS will save some money. The IT Director can glory in the permanent cost savings for a year – and then what? Business will want to take advantage of the flexibility that the move to the cloud offers.

The drift from desktop to laptop to mobile devices gathers pace. Mobile devices coupled with cloud-based services have been called the ‘App Internet’. It seems that many websites will cease to be and might be replaced by dedicated low-cost or free Apps that provide simple user interfaces. New businesses with new business models focusing on mobile are springing up all the time. These businesses are agile by nature and Agile by method. The pull of the App internet and Agile approaches is irresistible.

The Move to SaaS and Mobile (App) Internet

I’m not the biggest fan of blue-sky forecasters, and I’m never entirely sure how they arrive at forecasts with a precision of more than one significant digit, but according to Forrester’s report “Sizing the Cloud”, the market for SaaS will grow from $21bn in 2011 to $93bn in 2016 and represent 26% of all packaged software. (http://forrester.com/rb/Research/sizing_cloud/q/id/58161/t/2).

Now, 26% of all packaged software doesn’t sound so dramatic, but wait a minute. To re-architect an installed base of software and create new applications from scratch to reach that percentage will be a monumental effort. A lot of this software will be used by corporates whose systems span the (probably private) cloud and legacy systems, and the challenges of integration, security, performance and reliability will be daunting.

The Impact on Development, Delivery and Testing

Much of the software development activity in the next five years or so will be driven by the need for system users and service vendors to move to new business models based on new architectures. One reason SaaS is attractive to software vendors is that the marketing channel and the service channel are virtually the same, and the route to market is so simple that tiny boutique software shops can compete on the same playing field as the huge ISVs. The ISVs need to move pretty darned quickly or be left with expensive, inflexible, unmarketable on-premise products, so they are scrambling to make their products cloud-ready. Expect some consolidation in some market sectors.

SaaS works as an enabler for very rapid deployment of new functionality onto a range of devices. A bright idea in marketing being deployed as new functionality the same afternoon seems feasible, and some companies seem to be succeeding with ‘continuous delivery’. This is the promise of SaaS.

Many small companies (and switched-on business units in large companies) have, however, worked with continuous delivery for years. The emergence of the cloud and SaaS, and of maturing Agile, specification by example, continuous integration, automated testing and continuous delivery methods, means that many more companies can take this approach.

What Does this Mean for Software Practitioners?

Businesses like Amazon and Google have operated a continuous delivery model for years. The ‘Testing is Dead’ meme can be traced to an Alberto Savoia talk at Google’s GTAC conference (http://www.youtube.com/watch?v=X1jWe5rOu3g). Developers who test (with tools) ship code to thousands of internal users who ‘test’, and then the software goes live (often as a Beta). Some products take off; some, like Wave, don’t. The focus of Alberto’s talk is that software development and testing is often about testing ideas in the market.

Google may have a unique approach, I don’t know. But most organisations will have to come to terms with the new architectures and a more streamlined approach to development. The push and pull of these forces are forcing a rethink of how software available through the internet is created, delivered and managed. The impacts on testing are significant. Perhaps testing and the role of testers can at last mature to what they should be?

Some Predictions

Well, after that whirlwind tour of what’s hot and what’s not in the testing game, what exactly is going to happen? People like predictions, so I’ve consulted my magic bones and here are my ten predictions for the next five years. As predictions go, some are quite dramatic. But in some companies, in some contexts, these predictions will come true. I’m just offering some food for thought.

Our vision, captured in our Pocketbook (http://businessstorymethod.com) is that requirements will be captured as epic stories, and implementable stories will example and test those requirements to become ‘trusted’, with a behaviour-driven development approach and an emphasis on fully and always automated checking. It seems to us that this approach could span (and satisfy) the purist Agilists but allow many more companies used to structured approaches to adopt Agile methods whilst satisfying their need to have up-front requirements. Here are my predictions:

  1. 50% of in-house testers will be reassigned, possibly let go. The industry is over-staffed with unqualified testers using unsystematic, manual methods. Laying them off and/or replacing them with cheaper resource is an easy call for a CIO to make.
  2. Business test planning will become part of up-front analysis. It seems obvious, but why, for all these years, has one team captured requirements and another team planned the tests to demonstrate they are met? Make one (possibly hybrid) group responsible.
  3. Specification by example will become the new buzzword on people’s CVs. For no other reason than that SBE incorporates so many buzzwordy Agile practices (Test-First, Test-Driven, Behaviour-Driven, Acceptance-Test-Driven, Story-Testing, Agile Acceptance Testing), it will be attractive to employers and practitioners. With care, it might actually work too.
  4. Developers will adopt behaviour-driven-development and new tools. The promise of test code being automatically generated and executed compared to writing one’s own tests is so attractive to developers they’ll try it – and like it. Who writes the tests though?
  5. Some system tests and most acceptance tests will be business-model driven. If Business Stories, with scenarios to example the functionality, supported by models of user workflows, are created by business analysts, those stories can drive both developer tests and end-to-end system and acceptance tests. So why not?
  6. Business models plus stories will increasingly become ‘contractual’. For too long, suppliers have used the wriggle-room of sloppy requirements to excuse their poor performance and high charges for late, inevitable changes to specification. Customers will write more focused compact requirements, validated and illustrated with concrete examples to improve the target and reduce the room for error. Contract plus requirements plus stories and examples will provide the ‘trusted specification’.
  7. System tests will be generated from stories or be outsourced. Business story scenarios provide the basic blocks for system test cases. Test detailing to create automated or manual test procedures is a mechanical activity that can be outsourced.
  8. Manual scripted system test execution will be outsourced (in the cloud). The cloud is here. Testers are everywhere. At some point, customers will lose their inhibition and take advantage of the cloud+crowd. So, plain old scripted functional testers are under threat. What about those folk who focus more on exploratory testing? Well, I think they are under threat too. If most exploration is done in the cloud, then why not give some testing to the crowd too?
  9. 50% of acceptance tests will be automated in a CI environment for all time. Acceptance moves from a cumbersome, large manual test at the end to a front-end requirements validation exercise with stories plus automated execution of those stories. Some manual tests, overseen by business analysts will always remain.
  10. Tools that manage requirements, stories, workflows, prototyping, behaviour-driven development, system and acceptance testing will emerge.
Where do testers fit? You will have to pick your way through the changes above to find your niche. Needless to say, you will need more than basic ‘certification level’ skills. Expect to move towards a specialism or be reassigned and/or outsourced. Business analysis, test automation, test assurance, non-functional testing or test leadership beckon.

Whither the Test Manager?

You are a test manager or a test lead now. Where will you be in five years? In six months? It seems to me there are five broad choices for you to take (other than getting out of testing and IT altogether).
  1. Providing testing and assurance skills to business: moving up the food chain towards your stakeholders, your role could be to provide advice to business leaders wishing to take control of their IT projects. As an independent agent, you understand business concerns and communicate them to projects. You advise and cajole project leadership, review their performance and achievement and interpret outputs and advise your stakeholders.
  2. Managing Requirements knowledge: In this role, you take control of the knowledge required to define and build systems. Your critical skills demand clarity and precision in requirements and the examples that illustrate features in use. You help business and developers to decide when requirements can be trusted to the degree that software can reasonably be built and tested. You manage the requirements and glossary and dictionary of usage of business concepts and data items. You provide a business impact analysis service.
  3. Testmaster – Providing an assurance function to teams, projects and stakeholders: A similar role to 1 above – but for more Agile-oriented environments. You are a specialist test and assurance practitioner that keeps Agile projects honest. You work closely with on-site customers and product owners. You help projects to recognise and react to risk, coach and mentor the team and manage their testing activities and maybe do some testing yourself.
  4. Managing the information flow to/from the CI process: in a Specification by Example environment, if requirements are validated with business stories and these stories are used directly to generate automated tests which are run on a CI environment, the information flows between analysts, developers, testers and the CI system is critical. You define and oversee the processes used to manage the information flow between these key groups and the CI system that provides the control mechanism for change, testing and delivery.
  5. Managing outsourced/offshore teams: In this case, you relinquish your onsite test team and manage the transfer of work to an outsourced or offshore supplier. You are expert in the management of information flow to/from your software and testing suppliers. You manage the relationship with the outsourced test team, monitor their performance and assure the outputs and analyses from them.

Close

The recent history and current state of the testing business, the pressures that drive testers out of testing and the pull of testing into development and analysis will force a dramatic re-distribution of test activity in some, or perhaps most, organisations.

But don’t forget, these pressures on testing and predictions are generalisations based on personal experiences and views. Consider these ideas and think about them – your job might depend on it. Use them at your own risk.

Tags: #testingisdead #redistributionoftesting

Paul Gerrard Please connect and contact me using my linkedin profile. My Mastodon Account

First published 10/12/2010

I am proud and honoured to have received the Eurostar European Testing Excellence award for 2010. I’m particularly grateful to Geoff Thompson who proposed me, Graham Thomas who encouraged Geoff to put the effort in and my business partner Susan Windsor for putting up with me. Of course, I would like to thank the friends, colleagues and customers who provided references for the submission. Needless to say, I also owe a huge debt to my wife Julia and family.

To be singled out for the award is very special but I want to emphasise that I am part of a large community of testers. It is an honour to be associated with such a group of people in the UK, Europe and worldwide who are so generous with their time to challenge and to share their knowledge. In this respect, Testers seem to me to be unique in the IT industry.

Thank-you all once again.

Tags: #Eurostar #testingexcellenceaward #awards

Paul Gerrard Please connect and contact me using my linkedin profile. My Mastodon Account

First published 06/11/2009

Ten years ago, the Internet was a relatively small, closed network used by defence and academic organisations in the US. When Mosaic, a graphical Web browser, appeared in 1994 and became widely available, the explosive popularity of the Net began, and continues today. In August 1999 the number of people connected to the net was 195m and this is expected to be 250m by the end of the millennium. In the UK around 12m people or 20% of the population of all ages will have access when the new Millennium dawns. If you have a PC and modem the cost of connection is the cost of a local telephone call.

Because the on-line market is world-wide, unrestricted, vast (and still growing), the emergence of electronic commerce as a new way of conducting business gathers momentum. E-commerce has been described as the last gold-rush of the millennium. Since the cost of entry into the e-commerce marketplace is so low, and the potential rewards so high, business-to-business and business-to-consumer vendors are scrambling to capture maximum market share.

Although the cost of entry into the market is low, the risk of failure in the marketplace is potentially very high. The web sites of traditional vendors with strong brand names have not automatically succeeded, and there have been some notable failures. Many of the largest e-commerce sites were completely unknown start-up companies three years ago. E-commerce systems have massive potential, but with new technology come new risks, and testing must change to meet the needs of the new paradigm. What are the risks of the e-commerce systems?

The typical e-commerce system is a three-tiered client/server environment, database (often legacy system) servers working with application or business servers, fronted by web servers. Given this basic structure, many other special purpose servers may also be involved: firewalls, distributed object, transaction, authentication, credit card verification and payment servers are often part of the architecture. The Web is the vehicle by which the promise of client/server will finally be realised.

Many of the risks faced by e-commerce developers are the same as for client/server, but there are important differences. Firstly, the pace of development on the web is incredibly quick. 'Web-time' describes the hustle that is required to create and maintain momentum. Few systems are documented adequately. The time from a new idea to deployment onto the Web may only be a few weeks. Enhancements may be thought of in the morning to be deployed in the afternoon. Some sites, for example a TV news channel site, must provide constantly changing, but up to date content 24 hours a day, 365 days a year.

You have no control over the users who visit and use your site. Your users may access your site with any one of 35 different browsers or other web devices (will your site work with them all?). There is no limit to how many people can access your site at once (will your site crash under the load?). Users will not be trained, many may not speak your language, some may be disabled, some blind (will they find your site usable, fast, useful?). Some of your users will be crooks (can your site withstand a hacker's attack?). Some of your users may be under-age (are you selling alcohol to minors?). Whether you are in retail or not, one way of looking at the way people use your e-commerce site is to compare it with a traditional retail store.

Anyone can visit your store, but if your doors are shut (the site is down); if the queue to pay is too long; if the customer cannot pay the way they want to; if your price list is incomplete, out of date, or impossible to use, your customers will go elsewhere. E-commerce site designers must design to provide their users with the most relaxed, efficient and effective web-experience possible. E-commerce site testers must get inside the heads of users and create test scenarios that match reality.

What are the imperatives for e-commerce testers? To adopt a rapid response attitude. To work closely with marketeers, designers, programmers and of course real users to understand both user needs and the technical risks to be addressed in testing. To have a flexible test process having perhaps 20 different test types that cover each of the most likely problems. To automate as much testing as possible.

Whether home-grown or proprietary, the essential tools are test data and transaction design; test execution using the programmer and user interfaces; incident management and control to ensure the right problems get fixed in the right order. Additional tools to validate HTML and links, measure download time and generate loads are all necessary. To keep pace with development, wholly manual testing is no longer an option. The range of tools required is large but most are now available.

Paul Gerrard, 12 September 1999.

Tags: #e-commerce #Risks

Paul Gerrard Please connect and contact me using my linkedin profile. My Mastodon Account

First published 30/03/2007

The raw materials of real engineering – steel, concrete, water, air, soil, electromagnetic waves, electricity – obey the laws of physics.

Software, of course, does not. Engineering is primarily about meeting trivial functional requirements and complex technical requirements using materials that obey the laws of physics.

I was asked recently whether the definitions – Functional and Non-Functional – are useful.

My conclusion was that, at the least, they aren't helpful; at worst, debilitating. There are probably half a dozen other themes in the initial statement but I'll stick to this one.

There is a simple way of looking at F v NF requirements. FRs define what the system must do. NFRs define HOW the system delivers that functionality, e.g. is it secure, responsive, usable, etc.

To call anything 'not something else' can never be intuitively correct, I would suggest, if you need that definition to understand the nature of the concept in hand. It's a different dimension, perhaps. Non-functional means not working, doesn't it?

Imagine calling something long, “not heavy”. It's the same idea and it's not helpful. It's not heavy because you are describing a different attribute.

So, to understand the nature of Non-Functional Requirements, it's generally easier to call them technical requirements and have done with it.

Some TRs are functional, of course, and that's another confusion. Access control to data and function is a what, not a how. Security vulnerabilities are, in effect, functional defects: the system does something we would rather it didn't. Pen testing is functional testing. Security invulnerability is a functional requirement – it's just that most folk are overcome by the potential variety of threats. Pen tests use a lot of automation with specialised tools. But they are specialised, not non-functional.

These are functional requirements just like the stuff the users actually want. Installability, documentation, procedure and maintainability are ALL functional requirements, and all are functionally tested.

The other confusion is that functional behaviour is Boolean: it works or it doesn't work. Of course, you can count the number of trues and falses, but that is meaningless. Say 875 out of 1000 test conditions pass. It could be expressed as a percentage, but what exactly does that mean? Not much, until you look into the detail of the requirements themselves. One single condition could be several orders of magnitude more important than another. Apples and oranges? Forget it. Grapes and vineyards!

Technical behaviour is usually measurable on a linear scale. Performance and reliability, for example (if you have enough empirical data to be significant), are measured numerically. (OK, you can say meets v doesn't meet requirements is a Boolean, but you know what I mean.)
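To make the Boolean-versus-measurable contrast concrete, here is a minimal sketch (Python; the function, the workload and the 50ms target are invented for illustration):

```python
import statistics
import time

def withdraw(balance: float, amount: float) -> float:
    """Hypothetical function under test."""
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

# Functional check: Boolean - the behaviour is either right or it isn't.
assert withdraw(100.0, 30.0) == 70.0

# Technical check: a measurement on a linear scale, judged against a
# numeric target (here an invented one: 95th percentile under 50 ms).
timings_ms = []
for _ in range(1000):
    start = time.perf_counter()
    withdraw(100.0, 30.0)
    timings_ms.append((time.perf_counter() - start) * 1000)

p95 = statistics.quantiles(timings_ms, n=20)[-1]  # 95th percentile
verdict = "meets" if p95 < 50.0 else "misses"
print(f"95th percentile: {p95:.4f} ms - {verdict} the 50 ms target")
```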

Which brings me to the point.

In proper engineering, say civil/structural... (And betraying a prejudice, structural is engineering, civil includes all sorts of stuff that isn't...)

In structural engineering, for example, the functional requirements are very straightforward. With a bridge – say the Forth Bridge or the Golden Gate, built a long, long time ago – the functional requirements are trivial: “Support two railway lines/four lanes of traffic travelling in both directions (and a footbridge for maintenance).”

The technical requirements are much more complex. 100% of the engineering discipline is focused on technical requirements: masses of steel, cross-sections, moments, stresses and strains. Everything is underpinned by the science of materials (which are extensively tested in laboratories, with safety factors applied) and tabulated in blue or green books full of cross-sectional areas, beam lengths, cement/water ratios and so on. All these properties are calculated based on thousands of laboratory experiments, with statistical techniques applied to come up with factors of safety. Most dams, for example, are not 100% safe for all time; they are typically designed to withstand 1-in-200-year floods. And they fail safely, because one guy in the design office is asked to explore the consequences of failure – which in the main are predictable.

Software does not obey the laws of physics.

Software development is primarily about meeting immensely complex functional requirements and relatively simple technical requirements using some ethereal stuff called software that very definitely does not obey laws at all. (Name one? Please?)

Functional testing is easy; meeting functional requirements is not. Technical testing is also easy; meeting technical requirements is (comparatively) easy.

This post isn't about “non-functional requirements versus functional requirements”. It's an argument saying ALL requirements are hard to articulate and meet. So there.

Tags: #ALF

Paul Gerrard Please connect and contact me using my linkedin profile. My Mastodon Account

First published 20/09/2010

In the first essay in this series, I set out the challenges of system-level testing in environments where requirements documents define the business need and pre-scripted tests drive demonstrations that business needs are met. These challenges are not being met in most systems development projects.

In this second essay, I’d like to set out a vision for how organizations could increase confidence in requirements and the solutions they describe and to regard them as artifacts that are worth keeping and maintaining. Creating examples that supplement requirements will provide a better definition of the proposed solution for system developers and a source of knowledge for testing that aligns with the business need.

I need to provide some justification. The response of some to the challenge of capturing trusted requirements and managing change through a systems development project is to abandon the concept of pre-stated requirements entirely. The Agile approach focuses on the dynamics of development and the delivered system is ‘merely’ an outcome. This is a sensible approach in some projects. The customer is continuously informed by witnessing demonstrations or having hands-on access to the evolving system to experience its behaviour in use. By this means, they can steer the project towards an emergent solution. The customer is left with experience but no business definition of the solution – only the solution itself. That’s the deal.

But many projects that must work with (internally or externally) contracted requirements treat those requirements as a point of departure, to be left behind and to fade into corporate memory, rather than as a continuously available, dynamic vision of the destination. In effect, projects simply give up on having a vision at all and are driven by the bureaucratic needs to follow a process and the commercial imperative of delivering ‘on time’. It’s no surprise that so many projects fail.

In these projects, the customer is obliged to regard their customer test results as sufficient evidence that the system should be paid for and adopted. But the content of these tests is too often influenced by the solution itself. The content of these tests – at least at a business level – could be defined much earlier. In fact, they could be derived from the requirements and the ways the users intend to do business using the proposed system (i.e. their new or existing business processes). The essential content of examples is re-usable as tests of the business requirements and business processes from which they are derived. Demonstration by example IS testing. (One could call them logical tests, as compared with the physical tests of the delivered system.)

The potential benefits of such an approach are huge. The requirements and processes to be used are tested by example. Customer confidence and trust in these requirements is increased. Tested, trusted requirements with a consistent and covering set of examples provide a far better specification to systems developers: concrete examples provide clarity, improve their understanding and increase their chances of success. Examples provide a trusted foundation for later system and acceptance testing so reusing the examples saves time. The level of late system failures can be expected to be lower. The focus of acceptance tests is more precise and stakeholders can have more confidence in their acceptance decisions. All in all, a much improved state of affairs.
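As an illustration of ‘demonstration by example IS testing’, here is a minimal sketch of how examples that supplement a requirement can double as executable tests (Python/pytest; the discount requirement, the figures and the function are invented for illustration):

```python
# A requirement such as "orders of 100 or more earn a 5% discount" is
# ambiguous on its own (is exactly 100 included? 5% off what, exactly?).
# Concrete examples agreed with the customer pin the meaning down - and
# the same examples run, unchanged, as tests of the delivered system.
import pytest

def discounted_total(order_total: float) -> float:
    """Hypothetical implementation under test."""
    return order_total * 0.95 if order_total >= 100.0 else order_total

# Examples derived from the requirement: its concrete, testable meaning.
@pytest.mark.parametrize("order_total, expected", [
    (99.99, 99.99),    # just below the threshold: no discount
    (100.00, 95.00),   # exactly on the threshold: discount applies
    (200.00, 190.00),  # comfortably above: 5% off
])
def test_discount_examples(order_total, expected):
    assert discounted_total(order_total) == pytest.approx(expected)
```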

Achieving this enlightened state requires an adjustment of attitudes and focus by customers and systems development teams. I am using the Test Axioms (http://testaxioms.com) to steer this vision and here are its main tenets:

  1. Statements of Requirements, however captured, cannot be trusted if they are fixed and unchanging.
  2. Requirements are an ambiguous, incomplete definition of business needs. They must be supported by examples of the system in use.
  3. Requirements must be tested: examples are derived from the requirements and guided by the business process; they are used to challenge and confirm the thinking behind the requirements and processes.
  4. Requirements, processes and examples together provide a consistent definition of the business need to be addressed by the system supplier.
  5. The business-oriented approach is guided by the Stakeholder and Design Axioms.
  6. Examples are tests: like all tests, they have associated models, coverage, baselines, prioritisations and oracles.
  7. Business impact analyses during initial development and subsequent enhancement projects are informed by requirements and examples. Changes in need are reflected by changes in requirements and associated examples.
  8. Tests intended to demonstrate that business needs are met are derived from the examples that tested the requirements.
  9. Requirements and examples are maintained for the lifetime of the systems they define. The term ‘Live Specs’ has been used for this discipline.

If this is the vision, then some interesting questions (and challenges) arise:

  • Who creates examples to test requirements? Testers or business analysts?
  • Does this approach require a bureaucratic process? Is it limited to large structured projects?
  • What do examples look like? How formal are they?
  • What automated support is required for test management?
  • How does this approach fit with automated test execution?
  • What is the model for testing requirements? How do we measure coverage?
  • How do changing requirements and examples fit with contractual arrangements?
  • What is the requirements test process?
  • How do we make the change happen?
I’ll be discussing these and other questions in subsequent essays.

Tags: #Essaysontestdesign #examples

Paul Gerrard Please connect and contact me using my linkedin profile. My Mastodon Account