Test Engineering Blogs

Reader

Read the latest posts from Test Engineering Blogs.

from Paul Gerrard

#teststandard

Introduction

A long time ago in a place far far away ... I got involved with an initiative to create a software testing standard eventually published as British Standard 7925.

You can download the Draft Standard(s) from here.

These are some recollections of the years 1993 to 1998. I make no promise to get any of these memories 100% correct. (I haven't consulted any of the other participants in the story; I am writing this essay on impulse). They are mostly impressions, and impressions can be wrong. But I was there and involved at least.

As the Standard explains, this initiative started in January 1989 with a Standards Working Party (SWP) formed from members of the Specialist Interest Group in Software Testing (SIGIST, now BCS SIGIST). I don't know the detail of what actually happened but, in July 1992, a draft document was produced, and after some use of it in a few places, no one seemed very happy with it.

After a period, some of the members of the SWP decided to have another attempt. Invitations went out through the SIGIST network and newsletter to get others involved, and I was one of the 'new intake' to the SWP.

A New Start

In January 1993, a reformed SWP re-started from scratch, but retained the existing document as a source – some of it might be reusable, but the scope, overall direction and structure of the document needed a re-think.

PA Consulting generously offered the SWP free use of one of their meeting rooms in a plush office in Victoria, London, and provided nice coffee and generous bowls of fresh fruit, I recall. The number of participants in the group varied over time, averaging 10-12, I guess. Our monthly meetings typically had 8-10 people involved.

My old friend Stuart Reid, of Cranfield University at the time, led the effort – he had experience of working with standards bodies before. Other members (whom I would name if I could remember them all) worked for organisations including the National Physical Laboratory, the Safety-Critical Systems Club, IPL and Praxis. I was a consultant at Systeme Evolutif, a UK testing services company which became Gerrard Consulting some years later. And so on.

I recall I was in a minority – one of a small number of consultants working in middle-of-the-road IT at the time, rather than safety-critical or high-integrity systems. We were sometimes at odds with the other SWP-ers, but we learnt a lot along the way. We got on pretty well.

Early Direction

The original goal of the SWP was to come up with consistent, useful definitions of the main test design techniques that had been described in books like Glenford J Myers' The Art of Software Testing (1st Edition) and Boris Beizer's Software Testing Techniques.

At the time, there were very few books on software testing, although there were academics who had published widely on (mostly) structural test techniques such as statement, branch/decision testing and more stringent code-based approaches, as well as functional techniques such as combinatorial-, logic-, state- and X-Machine-based test design.

The descriptions of these techniques were sometimes imprecise and occasionally in conflict with one another. So our plan was to produce a consistent set of descriptions.

From the eventual standard:

“The most important attribute of this Standard is that it must be possible to say whether or not it has been followed in a particular case (i.e. it must be auditable). The Standard therefore also includes the concept of measuring testing which has been done for a component as well as the assessment of whether testing met defined targets.”

Now this might not have been at the forefront of our minds at the start. Surely, the goal of the standard is to improve the testing people do? At any rate, the people who knew about standards got their way.

It seemed obvious that we would have to define some kind of process to give a context to the activity of test design, so we decided to look only at component-level testing and leave integration, system and acceptance testing to other standards efforts. We would set the path for the other test phase standards and leave it there. (Yes, we really thought for a while we could standardise ALL testing!)

Some Progress, with Difficulty

So, we limited the scope of the standard to unit, component, program, module and class testing. Of course, we first had to define what a component was. Our first challenge. We came up with:

Component: “A minimal software item for which a separate specification is available”

and:

Component testing: “The testing of individual software components. After [IEEE]”

That was kind of easy enough.

After some debate, we settled on a structure for the standard. We would define a Component Test Process, introduce each functional and structural test technique in two dimensions: as a technique for test design and as a technique for measurement (i.e. coverage).

All we had to do now was write the content, didn't we?

Difficulties and the Glossary

I think at that point, all our work was still ahead of us. We made some progress on content but got into long and sometimes heated debates. The points of dispute were often minor, pedantic details. These were important but, in principle, should have been easy to resolve. But no. It seemed that, of the seven or eight people in the room, two or three were disgruntled most of the time. We took turns to compromise and be annoyed, I guess.

From my perspective, I thought mostly, we were arguing over differing interpretations of terms and concepts. We lacked an agreed set of definitions and that was a problem, for sure.

In a lull in proceedings, I made a proposal. I would take the existing content, import it into an MS Access database, write a little code, and scan it for all the one-, two-, three- and four-word phrases. I would pick out those phrases that looked like they needed a definition and present the list at the next meeting. It took me about a day to do this. There were about 10,000 phrases in our text. I picked out 200 or so to define, presented them, and the group seemed content to agree definitions as and when the conversation used a term without one. This defused the situation and we made good progress without the arguing.
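The scan I describe is easy to reproduce today. Here is a minimal sketch in Python rather than MS Access; the sample text and the frequency threshold are my own illustrative choices, not the originals.

```python
from collections import Counter
import re

def candidate_phrases(text, max_words=4):
    """Count every one- to four-word phrase (n-gram) in the text;
    frequent multi-word phrases are candidates for a glossary."""
    words = re.findall(r"[a-z][a-z-]*", text.lower())
    counts = Counter()
    for n in range(1, max_words + 1):
        for i in range(len(words) - n + 1):
            counts[" ".join(words[i:i + n])] += 1
    return counts

# Illustrative sample text -- not the original draft standard.
draft = ("Branch testing requires every branch to be exercised. "
         "Statement testing requires every statement to be exercised.")
phrases = candidate_phrases(draft)
# Phrases used more than once are the likeliest glossary candidates.
shortlist = [p for p, c in phrases.most_common()
             if c > 1 and len(p.split()) > 1]
```

Picking the 200 definitions out of a list like this is still a human judgement call, of course; the code only shortens the reading.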

This was probably my most significant contribution. The resulting Glossary became BS 7925-1 (small fanfare please).

Process, Damned Process

The process was not so hotly debated, but why damned? Some of the group thought it would be uncontroversial to define a process for Component Testing. But some of us (the middle-of-the-road brigade) feared that a process, no matter how gently defined and proposed, could provoke practitioners to reject the standard out of hand.

We adopted a minimalist approach – there is little detail in the process section except a sequential series of activities with multiple feedback loops. There is no mention of writing code or amending code to fix bugs. It just explains the sequence and possible cycles of the five test activities.

It caused upset all the same – you can't please all the people all the time, and so on. But see my views on it later.

Finishing the Drafts

I have to say, once the structure and scope of the Standard were defined, and the Glossary definition process underway, I stepped back somewhat. I'm not a good completer-finisher. I reviewed content but didn't add a lot of material to the final draft. The group did well to crank out a lot of the detail and iron out the wrinkles.

Submission to the British Standards Institute

Stuart managed the transition from our SWP to the standards body. Knowing that standards can be expensive to purchase once published, we created a final draft and made it publicly available for free download. That final draft went to the BSI.

The SWP wrote the standard. The BSI appointed a committee (IST/15) to review and accept it.

IST/15 Committee

You can see that the parties involved were the sort of august organisations who might implement a Component Test Standard.

Legacy

It's just over 25 years since the standard was published on 15 August, 1998.

In April 2001, I set up a website (http://testingstandards.co.uk) which hosted downloadable copies of the (SWP Draft) standards. The site was managed by some colleagues from the SWP and others interested in testing standards at the time. Some progress was made (on some non-functional testing standards) but that work fizzled out after a few years. I keep the website running for sentimental reasons, I suppose. It is neglected, out of date and most of the external links don't work, but the download links above do work and will continue to do so.

BS 7925-1 and -2 have been absorbed into, and replaced by, ISO 29119. The more recent 29119 standard caused quite a stir and there was organised resistance to it. I fully agree that standards should be freely available, but the response from people who are not interested in, or don't believe in, standards was rather extreme and often very unpleasant.

Let standards be, I say. If people who see value in them want to use them, let them do so. Free country and all that. I don't see value in large overarching standards myself, I must say, but I don't see them as the work of the devil.

Anyway.

BS 7925 – a Retrospective

BS 7925-1 – “Vocabulary” - provided a glossary of some 216 testing terms used in the Component Testing Standard.

7925-1 some terms defined

Skimming the glossary now, I think the majority of definitions are not so bad. Of course, they don't mention technology and some terms might benefit from a bit of refinement, but they are at least (and in contrast to other glossaries) consistent.

BS 7925-2 – “Component Testing” - provided a process and definitions of the test design techniques.

7925-2 component testing

Component Test process

The final draft produced by the SWP, and the eventual standard, contained some guidelines that place the process squarely in the context of a waterfall or staged development approach. At the time, Extreme Programming (1999) and the Agile Manifesto (2001) lay in the future.

Does this mean the process was unusable in a modern context? Read the process definition a little more closely and I think it could fit a modern approach quite well.

7925 Component Test Process

Forget the 'document this' and 'document that' statements in the description. A company-wide component test strategy is nonsense for an agile team of two developers. But shouldn't the devs agree some ground rules for testing before they commit to TDD or a Test-First approach? These and other statements of intent and convention could easily be captured in comments in your test code.

Kent Beck (who wrote the seminal book on TDD) admits he didn't invent the test-first approach but discovered it in an ancient programming book in the late 1990s. Michael Bolton helpfully provides an example – perhaps the first(?) – description of Test-First as an approach here.

We designed the component test process to assume a test-first approach was in place and it is an iterative process too – the feedback loops allow for both. Test-First might mean writing the entirety of a test plan before writing the entirety of the component code. But it could just as easily refer to a single check exercising the next line of code to be added.

So I would argue that with very little compromise, it supports TDD and automated testing. Not that TDD is regarded as a 'complete' component test approach. Rather, that the TDD method could be used to help to achieve required levels of functional and structural coverage, using good tools (such as a unit test framework and code coverage analyser).
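As a sketch of what that looks like in practice, here is a minimal test-first cycle in Python; the component (a leap-year check) and its checks are my own invention for illustration, not anything from the standard.

```python
# Step 1: write the checks first, as an executable specification.
# They fail until the component exists -- that failure is the point.

# Step 2: write just enough component code to make the checks pass.
def is_leap(year):
    # Divisible by 4, except centuries, unless divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Step 3: run the checks. A coverage analyser run over them can then
# report whether the required branch/decision coverage was achieved,
# which is exactly the "measurement" dimension the standard describes.
assert is_leap(2024)       # ordinary leap year
assert not is_leap(1900)   # century, not leap
assert is_leap(2000)       # divisible by 400, leap
assert not is_leap(2023)   # not divisible by 4
```

The feedback loops in the Component Test Process map naturally onto repeating steps 1 to 3, one small check at a time.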

I also appreciate that for many developers it doesn't add anything to a process they might already know by heart.

Test Techniques

Regarding the technique definitions, they pretty much stand as a concise and coherent set of descriptions. Annex B provides some good worked examples of the techniques in use. Of course, there are deeper, academic treatments of some, perhaps all, of these techniques. For example, Kaner, Padmanabhan and Hoffman wrote the 470-page “Domain Testing Workbook” – a generalisation of the equivalence partitioning and boundary value techniques – in 2013.

If you want a description of the techniques that emphasises their similarity as examples of model-based test design and measurement, you will struggle to find a better summary.

In Conclusion

This blog has been a trip down a (fallible) memory lane describing events that occurred from 34 to 25 years ago.

I am not suggesting the standard be revived, reinstated, brought up to date, or ever used. But I wanted to emphasise two things in this essay:

  1. The BS 7925 Standards effort was substantial – overall, it took nine years from idea to completion. The second, more productive phase took about four years. I am guessing the effort was between 300 and 500 person-days for the SWP alone. People gave their time in good faith and for no reward.

  2. The challenges of programming and testing have not changed that much. The thinking, debate, ideals aspired to and compromises made in the late 1990s to create a component test standard are much the same.

Even so, do not think that creating a workable Component Test Standard for modern environments would be a simple task.

 
Read more...

from Paul Gerrard

First published 29/06/2015

Big Data seems to be the latest trending buzzword.

The term has been around for a while but now, the largest corporations are promoting Big Data products and services very strongly, so something big is on the horizon. Right now, it still looks like a load of hype, but scratching beneath the surface, it seems to me that it has the potential to affect every person in society and there's no getting away from it. What is all the fuss about?

Big Data isn't really just about 'big'. Depending on who you ask, the mnemonics “V3” or “V4” summarise it well.

  • Volume – the quantity of data – and it's big.
  • Velocity – the rate of arrival/capture of data – and that's big too.
  • Variety – the sheer variety of data and formats to be used.
  • Veracity – the accuracy, truth or value of that data.

Volume and velocity are driving the technical aspects – relational is out, NoSQL (not only SQL) is in, and existing relational data skills are not enough.

Variety and veracity are the real challenges: device instrumentation, social feeds, government, location, financial, voice, image and video and all the data captured by any (and I mean ANY) device that we use or encounter or that monitors us and the gadgets we use are being stored, because some day, it might be useful to a data analyst working for a start-up, a corporate or our government.

If you don't know anything about Big Data, this session will provide a basic introduction to what's happening out there, right now.

Here is the video of the webinar I presented on 28 August 2013. I mentioned some books on data analysis and tools during the talk. They aren't listed in the video, so they are listed below.

  • Data Analysis with Open Source Tools, Philipp K. Janert – the title says it all – Python libraries (numpy, scipy, matplotlib, simpy, pycluster), R, GNU Scientific Library (GSL), Sage, the C Clustering Library, Berkeley DB, SQLite
  • Python for Data Analysis, Wes McKinney – Python libraries: numpy, pandas, matplotlib, IPython, SciPy
  • Interactive Data Visualization for the Web, Scott Murray – an introduction to using the D3 (Data-Driven Documents) open source JavaScript library to present data in a myriad of interesting ways
  • Visualize This, Nathan Yau – less about tools, more about how to “tell your story with data presented in creative, visual ways”



Tags: #BigData #TestAnalytics

 
Read more...

from Paul Gerrard

First published 11/12/2015

I am grateful to Antony Edwards of TestPlant for asking me to co-present a webinar and share my responses to some of the issues raised in the PAC Study, 'Digital Testing in Europe: Strategies, Challenges and Measuring Success' [1]. In the webinar, I focus on Digital and its effect on testing and test teams and make some suggestions for how the challenge can be met.

You can watch the webinar here: http://www.testplant.com/videos/survey-of-digital-testing-by-pac-and-testplant/.

I've also written a short paper that explores some of the issues of Digital, testing and tools and you can download that here: Digital Transformation, Testing and Automation.

Below are the references in the paper.

  1. Pierre Audoin Consultants (PAC), “Digital Testing in Europe: Strategies, Challenges and Measuring Success”
  2. Top 20 Marketing Buzzwords, https://zoomph.com/blog/top-20-digital-marketing-buzzwords/
  3. 6 Strategies to Drive Customer Engagement in 2015, http://www.forbes.com/sites/forbesinsights/2015/01/29/6-strategies-to-drive-customer-engagement-in-2015/2/
  4. The CAST Report, Dorothy Graham and Paul Gerrard, 1999.
  5. A New Model for Testing, Paul Gerrard, http://dev.sp.qa/download/newModel


Tags: #ALF

 
Read more...

from Paul Gerrard

First published 09/06/2016

I read a fairly, let's say, challenging article on the DevOps.com website here: http://devops.com/2016/06/07/devops-killed-qa/

It's a rather poor, but sadly typical, misrepresentation or, let's be generous, misunderstanding of the "state of the industry". The opening comment gives you the gist.

"If you work in software quality assurance (QA), it’s time to find a new job."

Apparently DevOps is the 'next generation of agile development ... and eliminates the need for QA as a separate entity'. OK, maybe DevOps doesn't mandate or even require independent test teams or testers so much. But it does not say testing is not required. Whatever.

There then follows a rather 'common argument' – I'd say eccentric – view of DevOps at the centre of a Venn diagram. He then references someone else's view that suggests DevOps QA is meant to prevent defects rather than find them but, with all due respect(!), both are wrong. Ah, now we get to the meat. Nearly.

The next paragraph conflates Continuous Delivery (CD), Continuous Integration and the 'measurement of quality'. Whatever that is.

"You cannot have any human interaction if you want to run CD."

Really?

"The developers now own the responsibility rather than a separate entity within the organization"

Right. (Nods sagely)

"DevOps entails the use of vendors and tools such as BUGtrackJIRA and GitHub ..."

"To run a proper DevOps operation, you cannot have QA at all"

That's that then. But there's more!

"So, what will happen to all of the people who work in QA? One of the happiest jobs in the United States might not be happy for long as more and more organizations move to DevOps and they become more and more redundant."

Happy? Er, what? (Oh by the way, redundant is a boolean IMHO).

Then we have some interesting statistics from a website, http://www.onetonline.org/link/summary/15-1199.01. I can't say I know the site or the source of its data well. But it is entirely clear that the range of activities of ISTQB-qualified testers has a healthy future. In the nomenclature of the labels for each activity, the outlook is 'Bright' or 'Green'. I would have said, at least in a DevOps context, that their prospects were less than optimal but, according to the author's source, prospects are blooming. Hey ho. Quote a source that contradicts one's main thesis. Way to go!

But, hold on - there really is bad news ...

"However, the BLS numbers are likely too generous because the bureau does not yet recognize “DevOps” as a separate profession at all"

So stats from an obviously spurious source have no relevance at all. That's all right then.

And now we have the killer blow. Google job search statistics. Da dah dahhhhh!

"As evidence, just look at how the relative number of Google searches in Google Trends for “sqa jobs” is slowly declining while the number for “devops jobs” is rapidly increasing:"

And here we have it. The definitive statistics that prove DevOps is on the rise and QA jobs are collapsing!

qa jobs vs devops jobs

"For QA, the trend definitely does not look good."

So. That's that. The end of QA. Of Testing. Of our voice of reason in a world of madness.

Or is it? Really?

I followed the link to the Google stats. I suggest you do the same. I entered 'Software Testing Jobs' as a search term to be added and compared on the graph and... voila! REDEMPTION!!!

Try it yourself: add a search term to the analysis. Here is the graph I obtained:

Now, our American cousins tend to call testers and testing 'QA'. We can forgive them, I'm sure. But I know the term 'testers' is more than popular in IT circles over there. So think on this:

The ratio of tester jobs to DevOps jobs is around five to one. That's testers to ALL JOBS IN DEVOPS – FIVE TO ONE.

ARE WE CLEAR?

So. A conclusion.

  1. Don't pay attention to blogs of people with agendas or who are clearly stupid.
  2. Think carefully about the apparent sense but clear nonsense that people put on blogs.
  3. Be confident that testing, QA or whatever you call it is as important now as it was forty years ago and always will be.

It's just that the people who do testing might not be called testers forever.

Over and out.

VOTE REMAIN!

 



Tags: #testing #DevOps #QA #DevOpsKillingQA

 
Read more...

from Paul Gerrard

First published 06/03/2015

This blog first appeared on the EuroSTAR blog in April 2014 (http://conference.eurostarsoftwaretesting.com/2014/courage-and-ambition-in-teaching-and-learning/)

In 2013, Cem Kaner (http://kaner.com) asked me to review a draft of the ‘Domain Testing Workbook’ (DTW) written by Cem in partnership with Sowmya Padmanabhan and Douglas Hoffman. I was happy to oblige and pleased to see the book published in October 2013. At 480 pages, it’s a substantial work and I strongly recommend it. I want to share some ideas we exchanged at the time and these relate to the ‘Transfer Problem’.

In academic circles, the Transfer of Learning relates to how a student applies the knowledge they gain in class to different situations or the real-world. In the preface of DTW, the transfer problem is discussed. Sowmya and Cem relate some observations of student performance in a final exam which contained:

  • Questions that were similar to those the students had already experienced in class and homework assignments and
  • One question that required students to combine skills and knowledge in a way that was mentioned in lectures, but they had not practiced.
Almost every student handled the first type of question very well, but every student failed the more challenging question. It appears that the students were able to apply their knowledge in familiar situations, but not in an unfamiliar one. The transfer problem has been studied by academics and is a serious problem in the teaching of science in particular, but it also seems to exist in the teaching of software testing.

The ‘Transfer of Learning’ challenge is an interesting and familiar topic to me.

Like many people, in my final school year I sat A-Level examinations. In my chosen subjects – Mathematics, Physics and Chemistry – the exam questions tended to focus on ‘point topics’ lifted directly from the syllabus, and were similar to the practice questions on previous exams. But I also sat Scholarship or S-Level exams in maths and physics. In these exams, the questions were somewhat harder because they tended to merge two or even more syllabus concepts into one problem. They were clearly harder to answer and required more imagination and, I’m tempted to say, courage. I recall a simple example (it sticks in my mind – exam questions have a tendency to do that, don’t they?).

Now, the student would be familiar with the modulus of a number, |x|, being its absolute or positive value (x itself can be positive or negative), and with the familiar quadratic equation ax² + bx + c = 0, which can often be solved by inspection but can always be solved using the quadratic formula. A quadratic containing a modulus would not be familiar, however, so the problem demands a little more care. I leave it to you to solve.
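A problem of that flavour (my own illustration, not the actual exam question) might be:

```latex
\text{Solve } x^2 - 3\lvert x \rvert + 2 = 0.
\quad \text{Let } t = \lvert x \rvert,\ t \ge 0,\ \text{noting } x^2 = \lvert x \rvert^2 = t^2:
\quad t^2 - 3t + 2 = (t-1)(t-2) = 0
\;\Rightarrow\; t \in \{1, 2\}
\;\Rightarrow\; x \in \{\pm 1, \pm 2\}.
```

Neither the substitution nor the factorisation is hard on its own; the small act of courage is combining the two familiar ideas.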

Now, this ‘harder question style’ was (and still is) used by some universities in their own entrance exam papers. I sat university entrance exams which were exclusively of this pattern. Whether it is an effective discriminator of talent – I don’t know – but I got through them, thank goodness.

But my experience of testers not being able to transfer skills to real-world or more complex contexts is a manifestation of the ‘transfer problem’. It seems to me that it’s not lack of intellect that causes people to struggle with problems out of a familiar context. I’d like to consider two attitudes to teaching and learning that we should encourage: courage and ambition. For the first, I will draw a parallel with sports coaching.

Most years, I coach rowing at my local club. In rowing, and in particular sculling, a mistake can capsize the boat and tip the sculler into the river, so there's a risk, and the risk makes people unwilling to commit to correct technique. Correct technique demands that, first, the rower gets their blades off the water, which leaves them in a very vulnerable, unstable position. They have to learn how to balance a boat before they can move it quickly: they must be confident first, skilled second, and only then can they apply their power to make the boat move quickly.

It’s a bit like taking the stabilisers (training wheels) off a pushbike – it takes some confidence and skill for a beginner rider to do that. Coaching and learning tightrope walking, skiing, climbing and gymnastics are all similar.

Athlete coaching involves asking athletes to have courage: to trust their equipment, the correct technique and the laws of physics, and not to fear the water or a fall in the snow. Coaches sometimes almost force people to fail so they learn that failure doesn't hurt so much, and that they can commit knowing the consequence of failure is not so bad after all.

I remember many years ago when I was learning to ski in a class of ten people – at one point on a new slope, the whole class was having difficulty. So the ski instructor took us to an ‘easier slope’. We struggled there too, but made some progress. Then we went back to the first slope. Remarkably, everyone could ski down the first slope with ease. In fact, the ski instructor had lied to us – he took us to a harder slope to ‘go back to basics’. It turned out that it was confidence that we lacked, not the skill.

Getting people to recognise that the risk isn’t so bad, to place trust in things they know, to have courage to try and keep trying can’t be learnt from a book or online course. It takes practice in the real world, perhaps in some form of apprenticeship and with coaching, not just teaching, support. Coaches must strongly challenge the people they coach, continuously.

The best that a book can do is present the student (and teacher) some harder problems like this with worked examples. If we expect the student to fail, we should still set them this kind of problem, but then the teacher/coach has to walk through the solution, pointing out carefully, that it’s not just allowed, but that it really is essential to ‘think outside the box/core syllabus’. Perhaps even to trust their hunches.

Coaches/trainers and testers both need courage.

The test design techniques are often taught as rote procedures whereby one learns to identify a coverage item (a boundary, a state transition, a decision in code) and then derive test cases to cover those items until 100% coverage is achieved. There is nothing wrong with knowing these techniques, but they always seem to be taught out of context. Practice problems are based on static, unambiguous but, above all, simple requirements or code, so when the student sees a real, complicated, ambiguous, unstable requirement it's no wonder they find it hard to apply the techniques effectively – or at all.
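To make the rote procedure concrete, here is boundary value analysis applied mechanically to an invented, unambiguous requirement – exactly the kind of tidy practice problem that real requirements rarely resemble.

```python
def boundary_values(lo, hi):
    """Classic three-point boundary value analysis for a valid
    inclusive range [lo, hi]: test just below, on, and just above
    each boundary."""
    return sorted({lo - 1, lo, lo + 1, hi - 1, hi, hi + 1})

# Invented requirement: "quantity must be between 1 and 100 inclusive".
tests = boundary_values(1, 100)
# -> [0, 1, 2, 99, 100, 101]: six cases give 100% boundary coverage
# of this one requirement. A real requirement needs several different
# models applied at once, and the boundaries are rarely stated this plainly.
```

The procedure is trivially mechanisable, which is rather the point: the skill that matters is choosing and justifying the model, not turning its handle.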

These stock techniques are often presented as a way of preparing documentation to be used as test scripts. They aren’t taught as test models having more or less effectiveness or value for money to be selected and managed. They are taught as clerical procedures. The problem with real requirements is you need half a dozen different models on each page, on each paragraph, even.

A key aspect of exploratory testing is that you should not be constrained but should be allowed and encouraged to choose models that align with the task in hand so that they are more direct, appropriate and relevant. But the ‘freedom of model choice’ applies to all testing, not just exploratory, because at one level, all testing is exploratory. I’ve said that before as well (http://gerrardconsulting.com/index.php?q=node/588).

In future, testers need to be granted the freedom of choice of test models but for this to work, testers must hone their modelling skills. We should be teaching what test models are and how models can be derived, compared, discarded, selected and used. This is a much more ambitious goal than teaching and learning the rote-procedures that we call, rather pompously, test design techniques. I am creating a full-day workshop to explore how we use models and modelling in testing. If you are interested or have suggestions for how it should work, I’d be very interested to hear from you.

We need to be more ambitious in what we teach and learn as testers.

Tags: #teaching #learning

 
Read more...