Paul Gerrard

My experiences and opinions in the Test Engineering business. I am republishing/rewriting old blogs from time to time.

First published 27/06/2014

The nice folk at Testing Cup in Poland have posted a video of my keynote on YouTube.

 



Tags: #ALF


First published 13/10/2020

Nicholas Snogren posted on LinkedIn a reference to an "Axioms of Testing" presentation from 2009 and asked me to comment on his "Tenets of Software Testing". There are some similarities, though not many I think, and some parallels too, but his question prompted me to give a longer response than I guess was expected. I said...

“Hi, thanks for asking for my opinion. Your tenets look interesting – and although I don't think they map directly to what I've written, they raise points in my mind that need a little airing – my mind grows cobwebby over time, and it's good to brush off old ideas. A bit like exercising muscles that haven't been used for a while haha.”

I give my response as a comparison with my Tester's Pocketbook and the Test Axioms website and presentations. (I notice that some of these posts are around 12 years old and some links don't work anymore. Some are out of my control; others I'll have to track down and correct – let me know if you want that.)

Tenets v Axioms

Firstly, let's get our definitions right.

According to dictionary.com, a Tenet is “any opinion, principle, doctrine, dogma, etc., especially one held as true by members of a profession, group, or movement.” Tenets might be regarded as beliefs that don't require proof and don't provide a strong foundation.

From the same source, an Axiom is, “i) a self-evident truth that requires no proof. ii) a universally accepted principle or rule. iii) Logic, Mathematics. a proposition that is assumed without proof for the sake of studying the consequences that follow from it”

I favoured the use of Axioms as a foundation for thinking in the testing domain. Axioms, if they are defensible, would provide a stronger foundational set of ideas. When I proposed a set of Testing Axioms, there was some resistance – here's a Prezi talk that introduces the idea.

James Bach in particular challenged the idea of Axioms and suggested I was creating a school of testing (when schools of testing were getting some attention) here.

By and large, because the Axioms are defined in context-neutral terms, challenges have tended to be easy to acknowledge, disarm and set aside. Critics, almost entirely from the context-driven school, jumped the gun, so to speak – they clearly hadn't read what I had written at the time before critiquing it. Only one or two people responded to James' call to arms to criticise the Axioms and challenged them.

The Axioms are fully described in The Tester's Pocketbook – http://testers-pocketbook.com/.

The Axioms of Testing website – https://testaxioms.com/ – sets out the Axioms with some explanation and provides around 50% of the pocketbook content for free.

Axioms caught the attention (and criticism) of people because I pitched them as universal principles or laws of testing. Tenets, being less strident in their definition, might not attract attention (or criticism) in the same way.

Immediate Comments on the Tenets

The Tenets are numbered and italicised. My comments in plain text.

  1. A software product’s behavior is exhibited by interactions.
  2. There is potentially an infinite number of possible behaviors in software.

These are properties of software. I'm not sure what 1 says other than that behaviour is triggered by interactions and presumably observed through interactions. However, a lot of software behaves autonomously: it might respond to internal events such as the passing of time and might not exhibit any behaviour through interactions – a change of internal state, for example. I'm not sure 1 says much.

Tenet 2 is reasonable for reasonably-sized software artefacts.

In the Tester's Pocketbook, I hardly use the term software. I prefer that we test systems. Software is usually an important part of a system. Humans do not interact with software (except by reading or writing it). Software exists in the context of compiled programs, hosted on operating systems, running on devices which have peripherals and other interconnected systems, which may or may not have user interfaces.

Basing Axioms on Systems means that the Axioms are open to interpretation as Axioms of testing ANY system (i.e. anything – I don't press that idea, but it's an attractive one). Another 'benefit' is that all of the Systems Thinking principles can also be brought to bear on our arguments. Outside its context, Software is not a System.

3. Some of those behaviors are potentially negative, that is, would detract from the objectives of the software company or users.

I use the term Stakeholders to refer to parties interested in the valuable, reliable behavior of systems and the outcome and value of testing those systems.

4. The potentiality for that negative behavior is risk.

OK, but it could be better worded. I would simply say 'potential modes of failure' rather than negative behaviour.

5. It’s impossible to guarantee a lack of risk as it’s impossible to experience an infinite number of behaviors.

Not really. You can guarantee a no-risk situation if no one cares or no one cares enough to voice their concerns before testing (or after testing). There is always the potential for failure because systems are complex and we are not skilled enough to create perfect systems.

6. Therefore a subset of behaviors must be sampled to represent the risk.

Rather than represent, I would say trigger the failure(s) of concern to explore the risk and better inform a risk-assessment.

7. The ability to take an accurate sample, representative of the true risk, is a testing skill.

Not sure what you mean by sample – tests or test cases, I presume? Representative is a subjective notion, surely; 'true' I don't understand; and a testing skill would need more definition than this, wouldn't it?

8. A code change to an existing product may also affect the product in an infinite number of ways.

I'd use 'ANY' change, to a 'SYSTEM'. Why 'also'? What would you say fits into a 'not only.... but also...' clause? But I'm not sure I agree with this assertion anyway. A code change changes some software artefact. The infinite effects (faulty behaviors?) derive from infinite tests (or uses in production) – which you say in 5 is impossible to achieve. I'm not sure what you're trying to say here.

9. It is possible to infer that some behaviors are more likely to be affected by that change than others.

You can infer anything you like by calling upon the great Unicorn in the sky. How will you do this? You might use tools, which are limited in capability; you might use change and defect history; or you might guess based on partial knowledge and experience.

10. The risk -of that change- is higher within the set of behaviors that are more likely to be affected by that change.

Do you mean probability of failure or the consequence of failure? I assume probability. At any rate, this is redundant. You have already asserted this in 9. But it's also more complicated than this – a cosmetic defect on an app can be catastrophic and a system failure negligible at times.

11. The ability to accurately estimate a scope of affected behavior is another testing skill.

I would call this the skill of impact analysis rather than testing. Developers are relatively poor at this, even though they have far deeper technical knowledge (either they aren't able or they lack the time to impact-analyse to any reliable degree). So we rely on testing to catch regressions, which is less than ideal. Testers depend on their experience rather than knowledge of system internals. But, since buggy systems behave in essentially unpredictable ways, we must admit our experience is limited and fallible. It's not a 'skill' that I would dwell on.

12. The scope and sampling ideas alone are meaningless without empirical evidence.

The scope and sampling ideas have meaning regardless of whether you implement them. I suppose you might say they are useless ideas if you don't gather evidence.

13. Empirical evidence is gathered through interactions with the product, observation of resultant behavior, and assessment of those observations.

The word empirical is redundant. I would use the word 'some' here. We also get evidence from operation in production, for example. (Unless you include that already?)

14. The accuracy and speed of scope estimation, behavior sampling, and gathering of evidence are key performance indicators for the tester.

If you are implying that the activities in 13 are tester skills, I suppose you could make this assertion. But you haven't said what the value of evidence is yet. Is the purpose of testing only to evaluate the performance of testers? Hope not ;O)

15. Heuristics for the gathering of such evidence, the estimation of scope, and the sampling of behavior are defined in the Heuristic Test Strategy Model.

Heuristics are available in a wide range of sources including software, systems and engineering standards. Why choose such a limited source?

Inspiration for the Tenets

These tenets were inspired by James Bach’s “Risk Gap” and Doug Hubbard’s book “How to Measure Anything.” Both Bach and Hubbard discuss a very similar idea from different spaces. Hubbard suggests that by defining our uncertainty, we can communicate the value of reducing the uncertainty. Bach describes the “knowledge we need to know” as the “Risk Gap.” This Risk Gap is our uncertainty, and in defining it, we can compute the value of closing it. In testing, I realized we have three primary areas of uncertainty: 1) what is the “risk gap,” or knowledge we need to find out, 2) how can we know when we’ve acquired enough of that unknown knowledge, and 3) how can we design interactions with the program to efficiently reveal this knowledge.

There are several interesting anomalies to unpick here:

  • I recall James telling a story about Tom Gilb asserting anything could be measured. James suggested Love and Tom obliged. I don't think James was impressed.
  • 'Defining uncertainty' – how do you do that reliably? Numerically? Objectively? We can put any numbers we like against probability and consequence. Being certain, with or without evidence, is always subjective. People can say they are more certain, but based on ... what? How do we correlate data with a human emotion and use that to make engineering decisions? People can be easily deceived – by themselves, not just by others. Consider this, for example, and this.
  • Risk Gap – how is a quantity of knowledge measured? What units? With what certainty? These are aspects that James has argued against since the early 1990s.
  • Your three challenges 1), 2) and 3) are reasonable as goals. How do Bach and Hubbard argue you achieve them, if not by calling on the subjective opinions of other people?

Some More General Comments

You seem to be trying to 'make a case' for testing as a tool to address the risk of failure in systems. I like, and use, that same approach in a rounder sense in my conference talks and writings, when practicable. My observations on this are:

  1. The logic doesn't flow as it should because of flaws in the individual statements.
  2. You have no pre-definition of test, testing or its purpose at the outset, so it's not clear what your destination is.
  3. There's no defined goal of testing, other than to gather evidence to (my words) reassess risks and thereby reduce uncertainty (but you don't say why that's a 'good thing').
  4. Testing enables a reassessment of risk, but that reassessment may increase risk if, for example, you find a bug in something that was previously deemed reliable. (There's a bigger conversation to be had, but risk is not a BAD thing; it's the barrier(s) you need to navigate or break through to gain your REWARD.)
  5. Extant, significant risks are a BARRIER to accepting or releasing systems. As such, the goal of testing is to provide evidence to the people who make the decision that those risks are acceptable or negligible (reducing uncertainty, sure, but never eliminating it). But the ultimate goal of testing is to show that the system WORKS. Encountering failures is a detour from that goal. The testing goal is broader than exploring risks.
  6. You don't mention stakeholders at all. Why do we test? To provide evidence to testing stakeholders – our customers – so they can make better-informed decisions.

Summary

I don't want to give the impression that I'm criticising Nicholas or am arguing against the concept of Tenets or Principles or Axioms of testing. All I have tried to do is offer reasonable criticism of the Tenets to show that a) it is extremely difficult to postulate bullet-proof Tenets, Principles or Axioms and b) it is extremely easy to criticise such efforts by:

  • Exposing flaws in the language used and the logic in an argument that C follows B follows A etc.
  • Identifying implicit assumptions of meaning, scope, dependency and authority and
  • Offering examples of context that contradict, or expose flaws in, the statements made.

I do this because I have been there many times since 2008 and occasionally have to defend the Test Axioms from such criticisms. I have to say, Critical Thinking is STILL a rare skill – I wish criticism were more often proffered as a result of it.

References

  1. The Tester's Pocketbook – http://testers-pocketbook.com/
  2. Axioms of Testing website – https://testaxioms.com/


Tags: #ALF #TestAxioms #Tenets #CriticalThinking


First published 08/08/2014

The nice folk at Testing Cup in Poland have posted a video of my keynote on YouTube.

I originally gave a EuroSTAR keynote in 2002 titled 'What is the Value of Testing and how can we improve it?'. This talk brings the topic up to date but also introduces the New Model Testing that I'm working on at the moment. 

I've written a paper that describes New Model Testing; it can be downloaded here.



Tags: #valueoftesting #NewModelTesting #TestingCup


First published 10/04/2014

I'm working with Lalitkumar, who edits the Tea Time With Testers online magazine. It has a large circulation and I've agreed to write an article series for him on 'Testing the Internet of Everything'. I'll also be presenting webinars to go with the articles, the first of which is here: https://attendee.gotowebinar.com/register/1854587302076137473 – it takes place on Saturday 19 April at 15.30. An unusual time – but there you go.

You can download the magazine from the home page here: teatimewithtesters.com/

Lalit has asked for questions on the article and I'll respond to these during the webinar. For questions on a broader range of testing-related subjects, I'll write a response for the magazine. I'll also blog those questions and answers here.

Questions that result in an interesting blog post will receive a free Tester's Pocketbook – if you go through the TTWT website and contact Lalit, anything goes. I look forward to some challenging questions :O)

The first Q&A will appear shortly...



Tags: #TTWT #TeaTimewithTesters #IOE #IOT


First published 23/10/2013

This article appeared in the October 2013 Edition of Professional Tester magazine. To fit the magazine format, the article was edited somewhat. The original/full article can be downloaded here.

Big Data

Right now, Big Data is trending. Data is now being captured at an astonishing speed. Any device that has a power supply has some software driving it. If the device is connected to a network or the internet, then the device is probably posting activity logs somewhere. The volumes being captured across organisations are huge – databases of Petabytes (millions of Gigabytes) of data are springing up in large and not so large organisations. Traditional, relational technology simply cannot cope. Mayer-Schönberger and Cukier argue in their book "Big Data" [1] that it's not that the data is huge in itself; it's that, for every business domain, there seems to be much more of it than we ever collected before. Big Data can be huge, but the more interesting aspect of Big Data is its lack of structure. The change in the philosophy of Big Data is reflected in three principles.
  1. Traditionally, we have dealt with samples (because the full data set is large), and as a consequence we have tended to focus on relationships that reflected cause and effect. Looking at the entire data set allows us to see details that we never could before.
  2. Using the full data set releases us from the need to be exact. If we are dealing with data points in the tens or hundreds, we focus on precision. If we deal with thousands or millions of data points, we aren’t so obsessed with minor inaccuracies like losing a few records here and there.
  3. We must not be obsessed with causality. If the data tells us there is a correlation between two things we measure, then so be it. We don’t need to analyse the relationship to make use of it. It might be good enough just to know that the number of cups of coffee bought by product owners in the cafeteria correlates inversely with the number of severity 1 incidents in production. (Obviously, I made that correlation up – but you see what I mean). Maybe we should give the POs tea instead?
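
To make that last point concrete, here is a minimal sketch in Python; the two series and their values are invented purely for illustration (as is the correlation itself, as noted above).

    # Illustration only: measure how strongly two series move together,
    # without claiming anything about cause and effect. Numbers are invented.
    from math import sqrt

    def pearson(xs, ys):
        """Pearson correlation coefficient of two equal-length series."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sqrt(sum((x - mx) ** 2 for x in xs))
        sy = sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    # Weekly cups of coffee bought by product owners vs severity 1 incidents.
    coffee = [10, 12, 9, 15, 20, 18, 7]
    sev1_incidents = [5, 4, 6, 3, 1, 2, 7]

    print(f"correlation: {pearson(coffee, sev1_incidents):.2f}")  # strongly negative

The point is that the coefficient alone, not a causal model, is what this style of analysis acts on.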
The interest in Big Data as a means of supporting decision-making is rapidly growing. Larger organisations are creating teams of so-called data scientists to orchestrate the capture of data and analyse it to obtain insights. The phrase 'from insight to action' is increasingly used to summarise the need to improve and accelerate business decision-making.

'From Insight to Action'

Activity logs tend to be captured as plain text files with fields delimited by spaces, tabs or commas, or as JSON or XML formatted data. This data does not arrive validated, structured and integral as it would be in a relational table – it needs filtering, cleaning and enriching as well as storing. New tools designed to deal with such data are becoming available. A new set of data management and analysis disciplines is also emerging. What opportunities are out there for testing? Can the Big Data tools and disciplines be applied to traditional test practices? Will these test practices have to change to make use of Big Data? This article explores how data captured throughout a test and assurance process could be merged and integrated with definition data (requirements and design information) and production monitoring data, and analysed in interesting and useful ways.

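As a small illustration of that filtering, cleaning and enriching, here is a sketch in Python; the log format, field names and rules are assumptions invented for the example rather than those of any particular system or tool.

    # Illustration only: clean and enrich one line of a comma-delimited
    # activity log before storing it. The format and fields are invented.
    import csv
    import io
    from datetime import datetime

    RAW_LINE = "2013-10-02T14:03:55Z, login , alice,  ,200\n"

    def clean(line):
        """Parse, filter and enrich a raw log line; return None if unusable."""
        fields = [f.strip() for f in next(csv.reader(io.StringIO(line)))]
        if len(fields) != 5 or not fields[0]:          # filtering: drop malformed rows
            return None
        timestamp, event, user, session, status = fields
        return {
            "timestamp": datetime.strptime(timestamp, "%Y-%m-%dT%H:%M:%SZ"),
            "event": event,
            "user": user or "unknown",                 # cleaning: fill blank fields
            "session": session or None,
            "status": int(status),
            "hour_of_day": int(timestamp[11:13]),      # enriching: derive a new field
        }

    print(clean(RAW_LINE))

In practice this sort of transformation runs at scale in whatever Big Data tooling is in use; the point here is only the shape of the work – parse, reject or repair, then derive new fields worth analysing.
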
The original/full article can be downloaded here.

Tags: #continuousdelivery #BigData #TestAnalytics


First published 27/03/2013

Did you know? We’re staging some webinars

Last night, we announced dates for two webinars that I will present on the subject, “Story-Based Test Automation Using Free Tools”. Nothing very exciting in that, except that it’s the first time we have used a paid-for service to host our own webinar and marketed that webinar ourselves. (In the past we have always pitched our talks through other people who marketed them).

Anyway, right now (8.40 PM GMT and less than 24 hours since we started the announcements) we have 96 people booked on the webinar. Our GoToWebinar account allows us to accept no more than 100. Looks like a sell-out. Great.

Coincidentally, James Bach and Michael Bolton have revisited and restated their positions on the “testing versus checking” and “manual versus automated testing” dichotomies (if you believe they are dichotomies, that is). You can see their position here: http://www.satisfice.com/blog/archives/856.

I don’t think these two events are related, but it seemed to me that it would be a good time to make some statements that set the scene for what I am currently working on in general and the webinar specifically.

Business stories and testing

You might know that we (Gerrard Consulting) have written and promoted a software development method (http://businessstorymethod.com) that uses the concept of business stories and have created a software as a service product (http://businessstorymanager.com) to support the method. The method is not a test method, but it obviously involves a lot of testing. Testing that takes place throughout the development process – during the requirements phase, development phase, test phase and ongoing post-production phases.

Business stories are somewhat more to us than ‘a trigger for a conversation’, but we’ll use the term ‘stories’ to refer to them from now on.

In the context of these phases, the testing in scope might be called by other names and/or be part of processes other than 'test': requirements prototyping, validation (Specification by Example/Behaviour-Driven Development/Acceptance Test-Driven Development/Test-Driven Development – take your pick), feature-acceptance testing, system testing, user testing and regression testing during and after implementation and go-live.

There's quite a lot of this testing stuff going on. Right now, the Bach-Bolton dialogue isn't addressing all of this in a general way, so I'm keeping a watching brief on events in that space. I look forward to a useful, informative outcome.

How we use (business) stories

In this blog, I want to talk specifically about the use of stories in a structured domain-specific language – using, for example, the Gherkin format (see https://github.com/cucumber/gherkin) – to example (and that is a KEY word) requirements. I'm not interested in the Cucumber-specific extensions to the Gherkin syntax. I'm only interested in the feature heading (As a…/I want…/So that…) and the scenario structure (given…/when…/then…) etc., and how they are used to test in a broader sense:

  • Stories provide accessible examples in business language of features in use. They might be the starting point of a requirement, but usually not a full definition of a requirement. Without debating whether requirements can ever be complete, we argue that Specification by Example is not (in general) possible or desirable. See here: http://gerrardconsulting.com/index.php?q=node/596
  • If requirements provide definitions of behaviour in a general way, stories can be used to create examples of features described in requirements that are specific and, if carefully chosen, can be used to clarify understanding, to prototype behaviours and validate requirements in the eyes of stakeholders, authors and recipients of requirements. We describe this process here: http://gerrardconsulting.com/index.php?q=node/604
  • Depending on who creates these stories and scenarios and for what purpose, these scenarios can be used to feed a BDD, ATDD or Specification by Example approach. The terminology used in these approaches varies, but a tester would recognise them as a keyword-driven approach to test automation. Are these automated scenarios checks or tests? Probably checks. But these automated checks have multiple goals beyond ‘defect-detection’.
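
To illustrate the keyword-driven point in the last bullet, here is a minimal sketch in Python. It is not Cucumber or any other real tool, and the story, step wording and class names are invented for the example.

    # Minimal sketch of a keyword-driven step runner in the spirit of Gherkin
    # scenarios. Not a real tool; the story and steps are invented for illustration.

    # Feature: Withdraw cash
    #   As an account holder
    #   I want to withdraw cash from an ATM
    #   So that I can pay for things without visiting a branch
    SCENARIO = [
        "given the account balance is 100",
        "when the account holder withdraws 30",
        "then the account balance is 70",
    ]

    class Account:  # the code deliberately reuses the business language
        def __init__(self, balance):
            self.balance = balance

        def withdraw(self, amount):
            self.balance -= amount

    account = None

    def given_balance(amount):
        global account
        account = Account(int(amount))

    def when_withdraw(amount):
        account.withdraw(int(amount))

    def then_balance(expected):
        assert account.balance == int(expected), \
            f"expected balance {expected}, got {account.balance}"

    # The 'keyword table': step phrasing mapped to the function that implements it.
    STEPS = {
        "given the account balance is": given_balance,
        "when the account holder withdraws": when_withdraw,
        "then the account balance is": then_balance,
    }

    def run(scenario):
        for line in scenario:
            for phrase, step in STEPS.items():
                if line.startswith(phrase):
                    step(line[len(phrase):].strip())
                    break
            else:
                raise ValueError(f"no step matches: {line}")

    run(SCENARIO)
    print("scenario passed")

Note that the step phrases and the code share the same business vocabulary (account, balance, withdraw) – the Ubiquitous Language point picked up below.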

Story-based testing and automation

You see, the goals of an automated test (and let me persist in calling them tests for the time being) vary, and there are several distinct goals of story-based scenarios as test definitions.

In the context of a programmer writing code, the rote automation of scenarios as tests gives the programmer a head start in their test-driven development approach. (And crafting scenarios in the language of users segues into BDD of course). The initial tests a programmer would have needed to write already exist so they have a clearer initial goal. Whether the scenarios exist at a sufficiently detailed level for programmers to use them as unit-tests is a moot point and not relevant right now. The real value of writing tests and running them first derives from:

  1. Early clarification of the goal of a feature when defined
  2. Immediate feedback of the behaviour of a feature when run
  3. When the goal is understood and the tests pass, then the programmer can more safely refactor their code
Is this testing? 2 is clearly an automated test. 3 is the reusable regression test that might find its way into a continuous integration and test regime. These tests typically exercise objects or features through a technical API. The user interface probably won’t be exercised.

There is another benefit of using scenarios as the basis of automated tests. The language of the scenario (which is derived from the business's language in a requirement) can be expected to be reused in the test code. We can expect (or indeed mandate) the programmer to reuse that language in the naming of their variables and objects in code. The goals of Ubiquitous Language in systems (defined by Eric Evans and nicely summarised by Martin Fowler here: http://martinfowler.com/bliki/UbiquitousLanguage.html) are supported.

Teams needing to demonstrate acceptance of a feature (identified and defined by a story) often rely on manual tests executed by the user or tester. The tester might choose to automate these and/or other behaviour- or user-oriented tests as acceptance regression tests.

Is that it? Automated story tests are ‘just’ regression tests? Well maybe so.

The world is going 'software as a service' and the development world moves closer to continuous delivery approaches every day. The time available to do manual testing is shrinking rapidly. In extremis, to avoid bottlenecks in the deployment pipeline (http://continuousdelivery.com/2010/02/continuous-delivery/) there may be time only to perform cursory manual testing. Manual, functional testing of new features might take place in parallel with development and automation of functional tests must also happen ahead of deployment because automated testing becomes part of the deployment process itself. Perhaps manual testing becomes a test-as-we-develop activity?

But there are two key considerations for this high-automation approach to work:

  1. Firstly, I've said elsewhere that Continuous Delivery is a beast that eats requirements (http://gerrardconsulting.com/index.php?q=node/608) and for CD to work, the quality of requirements must be much higher than we are accustomed to. We use the term trusted requirements. You could say, tested and trusted. We, and I mean testers mostly, need to validate requirements using stories so the developers receive both trusted requirements and examples of features in use. Without trusted requirements, CD will just hit a brick wall faster.
  2. Secondly, it seems to me that for the testers not to be a bottleneck, the manual checking that they do must be eliminated. Whichever tests can be automated should be. The responsibility for automation of checking must move from being a retrospective activity to possibly a developer activity. This will free the manual testers to conduct and optimise their activity in the short time they have available.

There are several spin-off benefits of basing tests on stories and scenarios. Here are two: if test automation is built early, then all checks can take advantage of it; if automation is built in parallel with the software under test, then the developers are much more likely to consider the test automation and build the hooks to allow it to operate effectively. The continuous automated testing provides the early warning system of continuous delivery regimes. These don't 'find bugs'; rather, they signal functional equivalence. Or not.

I wrote a series of four articles on 'Anti-Regression Approaches' here: http://gerrardconsulting.com/index.php?q=node/479. What are the skills of setting up regression test regimes? Not necessarily the same as those required to design functional tests. Primarily, you need automation skills and a knowledge of the internals of the system under test. Are these testing skills? Not really. They are more likely to be found in developers. This might be a good thing. Would it not be best to place responsibility for regression detection on those people responsible for introducing regressions? Maybe developers can do it better?

One final point. If testers are allowed (and I use that word deliberately) to test or validate requirements using stories in the way we suggest, then the quality of requirements provided to developers will improve. And so will the software they write. And the volume of testing we are currently expected to resource will reduce. So we need fewer testers. Or should I say checkers?

This is the essence of the "redistributed testing" offer that we, as testers, can make to our businesses.

The webinar is focused on our technical solution and is driven by the thinking above.

Last time I looked we had 97 registrants on the 4th April Webinar. If you are interested, the 12th April webinar takes place at 10 AM GMT – you can register for it here: https://attendee.gotowebinar.com/register/4910624887588157952

Tags: #testautomation #businessstorymethod #businessstories #BusinessStoryManager #BDD #tdd #ATDD


First published 30/04/2013

Huib Schoots asked me to name my top 10 books for testers. Initially, I was reluctant because I'm no fan of beauty parades and if an overall 'Top 10 Books' were derived from voting:

 

  1. The same old same old books would probably top the chart again, and
  2. Any obscure books I nominated would be lost in the overall popularity contest.

 

So what's the point?

 

Nassim Taleb tweeted, just the other day: "You should never allow knowledge to turn into a spectator sport, by condoning rankings, awards, "hotness" & "incentives"" (3rd ethical rule)

— Nassim N. Taleb (@nntaleb) April 28, 2013

Having said that, and with reservations, I sent Huib a list. What books would I recommend to testers? I’m still reluctant to recommend books because it is a personal choice. I can’t say whether anyone else would find these useful or not. But, scanning the bookshelf in my study, I have chosen books that I like and think are *not* likely to be on a tester’s shelf. These are not in any order. My comments are subjective and personal.

  1. Zen and the Art of Motorcycle Maintenance, Robert M Pirsig – Quality is for philosophers, not systems professionals. I try not to use the Q word.
  2. Software Testing Techniques, Boris Beizer - for the stories and thinking, more than the techniques stuff (although that’s not bad).
  3. The Complete Plain Words, Sir Ernest Gowers – Use this or your language/locale equivalent to improve your writing skills.
  4. Test-Driven Development, Kent Beck – is TDD about test or design? Does it matter? Testing is about *much* more than finding bugs.
  5. The Tester’s Pocketbook, Paul Gerrard – I wrote this as an antidote to the ‘everything is heuristic’ idea, which I believe is limiting and unambitious.
  6. Systems Thinking, Systems Practice, Peter Checkland – an antidote to the “General Systems Theory” nonsense.
  7. The Tyranny of Numbers, Why Counting Can’t Make Us Happy, David Boyle – The title says it all. Entertaining discussion of targets, metrics and misuse.
  8. Straight and Crooked Thinking, Robert H Thouless – Title says it all again. Thouless wrote another book, 'Straight and Crooked Thinking in Wartime', 70-odd years ago - a classic.
  9. Logic: an introduction to elementary logic, Wilfrid Hodges – Buy this (or a similar) book on predicate logic: few understand what it is, but it's a cornerstone of critical thinking.
  10. Lectures on Physics, Richard P Feynman – It doesn’t matter whether you understand the Physics, it’s the thought processes he exposes that matters. RPF doesn't do bullsh*t.

  11. Theory of Games and Economic Behavior, John von Neumann, Oskar Morgenstern – OK, it's no. 11. Don't read this. Be aware of it. We have the mathematics for when we finally collect data that people can *actually* use to make decisions.

There are many non-testing books that didn't make my list. Near misses include at least one scripting/programming language book, all of Edward Tufte's books, Gause & Weinberg's "Exploring Requirements", DeMarco and Lister's "Peopleware", DeMarco's "Controlling Software Projects" and "Structured Analysis", Nancy Leveson's "Safeware", Constantine and Lockwood's "Software for Use" and lots more. Ten books isn't enough. Perhaps one hundred isn't enough either.

Huib asked me to explain my "General Systems Theory" nonsense comment. OK, nonsense may be too strong. Maybe I should say non-science?

The Systems movement has existed for 70-80+ years. Dissatisfaction with General Systems Theory in the 60s/70s caused a split between the theorists and those who wanted to evolve Systems Thinking into a problem-solving approach. Maybe you could call them two 'schools'? GST was attacked very publicly by the problem-solving movement (the Thinkers), who went down a different path. GST never delivered a 'grand theory' and appeared to exist only to publish a journal. The Thinker movement proceeded to work in areas covering hard and soft systems and decision-making support. It's clear that the schism between the two streams caused (or was caused by) antagonism. It is reminiscent of the testing schools 'debate'. A typical criticism in the 1970s was when one expert (Naughton) chided another expert, Lilienfeld (who attacked GST), by saying "there was nothing approaching a coherent body of tested knowledge to attack, rather a melange of insights, theorems, tautologies and hunches…". The problem that the 'Thinkers' had with GST is that most of the insights and theorems are unsubstantiated and not used for anything. GST drifts into the mystical sometimes – it's almost like astrology. Checkland says: "The problem with GST is it pays for its generality with lack of content" and "... the fact that GST has failed in its application does not mean that systems thinking has failed". The Systems Thinking movement, being focused on problem solving, inspired most of the useful (and used) approaches and methods. It's less 'pure' but, by being grounded in problem solving, it has more value. Google "General Systems Theory" and you get few hits of relevance and only one or two books written in the last 25 years. Try "Systems Thinking" and you get lots (although there's a certain amount of bandwagon-jumping perhaps). It seems to me that GST has been left behind. I don't like GST because it is so vague. It is impossible to prove or disprove the theories, so, being a sceptical sort, I find it says little that I can trust.

When I say systems Thinking I mean Systems Thinking and Soft Systems Methodology and not General Systems Research and Systems Inquiry

— Paul Gerrard (@paul_gerrard) December 22, 2012

 



Tags: #Top10Books


First published 18/07/2011

Got your attention? I'm triggered into posting what follows because Rob Lambert has published an 'ebook', The Problems of Testing. It reminded me of a blog I wrote 18 months or so ago – but never published. I never pressed the 'publish' button because I thought it was rather too negative. But Rob's frustration is shared and here's my take on it. I don't agree with all he says, but I think there's a certain amount of complacency, self-delusion and over-capacity in our business. Here's what I wrote...

I was going to call this post "Do you engage your brain before you test?" but as I wrote it, I got more and more upset by the state of our discipline. (I can hardly bring myself to call it a discipline any more.) It seems to me that the major 'trends' occurring in our discipline are being promoted and adopted at a terrifying rate. I'm not worried about the rate of change, or being left behind. I am worried that the people who are being sold these approaches and implementing them are taking on so-called 'solutions' without understanding what the underlying problem is, what they are trying to achieve, or how to evaluate 'success'. People have no time and no frame of reference in which to think or consider the pros and cons of available courses of action. Often, they don't even know who they are testing for. For example:

  • tools are used to automate tests that have unknown or limited value
  • 'crowds' might help us test but what is the objective, where is the control or accountability?
  • test design techniques are promoted as 'good practice' without people understanding the concept of test models, their value and limitations
  • coverage is discussed endlessly without any understanding of its meaning, subjectivity and interpretation
  • the language and terminology we use is riddled with duplication, inconsistency and ambiguity
  • outsourcing/offshoring are promoted as being effective and economic - without any discussion of their value
  • the exploratory/ad-hoc v planned/documented testing debate is a yah-boo-sucks schoolyard shouting match - how is an interested observer to understand the issues?
  • most debate is about software testing, but we test systems, don't we? Why don't we adopt a Systems Approach?
  • certification schemes are embedding many of these flawed ideas and are strangling competitive, viable and often better alternatives.

I could go on. If I suggested that 50% of the testers currently in our business shouldn't be – how would you argue against that? Why are YOU in this business? For example – can you string two words together? One of the reasons I wrote the TESTER'S POCKETBOOK was to drill down to the fundamentals that underpin all testing (if I could find them). It's been a struggle, but I've come up with some and they are useful. I call these fundamentals TEST AXIOMS and you can see them here. They make sense to me as a starting point for discussion on testing and test practices, but they also underpin much of the thinking about test strategy and improvement. Ask them of yourself, your organisation or the next person you interview. They work for me. Am I too pessimistic about the state of our industry? Is anything going well out there? If/as/when the economies of our various countries squeeze testers out of businesses – will you still be in a job? How will you justify your role? Developers, analysts and users combined can do most of what you do. Can you do their job? Who is indispensable now? Think about it.



Tags: #testing #testaxioms


First published 03/12/2009

The V-Model promotes the idea that the dynamic test stages (on the right hand side of the model) use the documentation identified on the left hand side as baselines for testing. The V-Model further promotes the notion of early test preparation.


Figure 4.1 The V-Model of testing.

Early test preparation finds faults in baselines and is an effective way of detecting faults early. This approach is fine in principle, and early test preparation is always worthwhile. However, there are two problems with the V-Model as normally presented.


The V-Model with early test preparation.

Firstly, in our experience, there is rarely a perfect, one-to-one relationship between the documents on the left hand side and the test activities on the right. For example, functional specifications don't usually provide enough information for a system test; system tests must often take account of some aspects of the business requirements as well as physical design issues. To be thoroughly planned, system testing usually draws on several sources of requirements information.

Secondly, and more important, the V-Model has little to say about static testing at all. The V-Model treats testing as a “back-door” activity on the right hand side of the model. There is no mention of the potentially greater value and effectiveness of static tests such as reviews, inspections, static code analysis and so on. This is a major omission and the V-Model does not support the broader view of testing as a constantly prominent activity throughout the development lifecycle.


The W-Model of testing.

Paul Herzlich introduced the W-Model approach in 1993. The W-Model attempts to address shortcomings in the V-Model. Rather than focus on specific dynamic test stages, as the V-Model does, the W-Model focuses on the development products themselves. Essentially, every development activity that produces a work product is "shadowed" by a test activity. The purpose of the test activity is specifically to determine whether the objectives of a development activity have been met and whether the deliverable meets its requirements. In its most generic form, the W-Model presents a standard development lifecycle with every development stage mirrored by a test activity. On the left hand side, typically, the deliverable of a development activity (for example, "write requirements") is accompanied by a test activity ("test the requirements") and so on. If your organisation has a different set of development stages, then the W-Model is easily adjusted to your situation. The important thing is this: the W-Model of testing focuses specifically on the product risks of concern at the point where testing can be most effective.


The W-Model and static test techniques.

If we focus on the static test techniques, you can see that there is a wide range of techniques available for evaluating the products of the left hand side. Inspections, reviews, walkthroughs, static analysis, requirements animation as well as early test case preparation can all be used.


The W-Model and dynamic test techniques.

If we consider the dynamic test techniques you can see that there is also a wide range of techniques available for evaluating executable software and systems. The traditional unit, integration, system and acceptance tests can make use of the functional test design and measurement techniques as well as the non-functional test techniques that are all available for use to address specific test objectives.

The W-Model removes the rather artificial constraint of having the same number of dynamic test stages as development stages. If there are five development stages concerned with the definition, design and construction of code in your project, it might be sensible to have only three stages of dynamic testing. Component, system and acceptance testing might fit your normal way of working. The test objectives for the whole project would be distributed across three stages, not five. There may be practical reasons for doing this and the decision is based on an evaluation of product risks and how best to address them. The W-Model does not enforce a project "symmetry" that does not (or cannot) exist in reality. The W-Model does not impose any rule that later dynamic tests must be based on documents created in specific stages (although earlier documentation products are nearly always used as baselines for dynamic testing).

More recently, the Unified Modeling Language (UML) described in Booch, Rumbaugh and Jacobson's book [5], and the methodologies based on it, namely the Unified Software Process and the Rational Unified Process™ (described in [6-7]), have emerged in importance. In projects using these methods, requirements and designs might be documented in multiple models, so system testing might be based on several of these models (spread over several documents).

We use the W-Model in test strategy as follows. Having identified the specific risks of concern, we specify the products that need to be tested; we then select test techniques (static reviews or dynamic test stages) to be used on those products to address the risks; we then schedule test activities as close as practicable to the development activity that generated the products to be tested.
 



Tags: #w-model
