Paul Gerrard

My experiences in the Test Engineering business; opinions, definitions and occasional polemics.


Paul Gerrard

First published 10/06/2014

At the Unicom NextGen Testing show on 26th June (http://www.next-generation-testing.com/), I'll be pitching some ideas for where the testing world is going – in around 15 minutes. I thought I'd lay the groundwork for that short talk with a blog that sets out what I've been working on for the past few months. These things might not all be new in the industry, but I think they will become increasingly important to the testing community.

There are four areas I've been working on in between travelling, conferences and teaching.

Testers and Programming

I've been promoting the idea of testers learning to write code (or at least becoming more technical) for some time. In February this year, I wrote an article for the Testing Planet: 'The Testers and Coding Debate: Can We Move on Now?' It suggested we 'move on', and that those testers who want to learn code should find it an advantage. It stirred up a lively debate, so it seems the debate is not yet over. No one is suggesting that learning how to write code should be compulsory, and no one is suggesting that testers become programmers.

My argument is this: for the investment of time and effort required, learning how to write some simple code in some language will give you a skill that you might be able to use to write your own tools, but more importantly, the confidence and vocabulary to have more insightful discussions with developers. Oh, and by the way, it will probably make you a better tester because you will have some insider knowledge on how programmers work (although some seem to disagree with that statement).

Anyway, I have taken the notion further and proposed a roadmap or framework for a programming training course for testers. Check this out: http://gerrardconsulting.com/?q=node/642

Lean Python

Now, my intention all along in the testers and programming debate was to see if I could create a Python programming course that would be of value for testers. I've been a Python programmer for about five years and believe that it really is the best language I've used for development and for testing. So, I discussed with my Tieturi friends in Finland the possibility of running such a course in Helsinki, and I eventually ran it in May.

In creating the materials, I initially thought I'd crank out a ton of PowerPoint and some sample Python code and walk the class through examples. But I changed tack almost immediately. I decided to write a Python programming primer in the Pocketbook format and extract content from the book to create the course. I'd be left with a course and a book (that I could give away) to support it. But then I realised two things:

  • First, it was obvious that to write such a short book, I'd have to ruthlessly de-scope much of the language, the standard functions, libraries etc.
  • Second, it appeared that in all the Python programming I'd done over the last five years, I had only ever used a limited sub-set of the language anyway. Result!
And so, I only wrote about the features of the language that I had direct experience of.

I have called this Lean Python and the link to the book website is here: http://www.apress.com/gb/book/9781484223840#otherversion=9781484223857

“Big Data” and Testing the Internet of Everything

Last year, I was very interested in the notion of Big Data and the impact it might have on testing and testers. I put together a webinar titled Big Data: What is it and why all the fuss? which you might find interesting. In the webinar, I mentioned something called Test Analytics. I got quite a few requests to explain more about this idea, so I wrote a longer article for Professional Tester Magazine to explain it. You can go to the PT website or you can download the article “Thinking Big: Introducing Test Analytics” directly from here.

Now, it quickly occurred to me that I really did not know where all this Big Data was coming from. There were hints from here and there, but it subsequently became apparent that the real tidal wave that is coming is the Internet of Things (also modestly known as the Internet of Everything).

So I started looking into IoT and IoE and how we might possibly test it. I have just completed the second article in a series on Testing the Internet of Everything for the Tea Time with Testers magazine. In parallel with each article, I'm presenting a webinar to tell the story behind each article.

In the articles, I'm exploring what the IoT and IoE are and what we need to start thinking about. I approach this from the point of view of a society that embraces the technology. Then I look more closely at the risks we face and, finally, at how we as the IT community in general, and the testing community in particular, should respond. I'm hopeful that I'll get some kind of IoE Test Strategy framework out of the exercise.

The first article in the series appears in the March-April edition of the magazine (downloadable here) and is titled, “The Internet of Everything – What is it and how will it affect you”.

There is a video of an accompanying webinar here: The Internet of Everything – What is it and how will it affect you.

A New Model of Testing

Over the past four years, since the 'testing is dead' meme, I've been saying that we need to rethink and redistribute testing. Talks such as “Will the Test Leaders Stand Up?” are a call to arms. How to Eliminate Manual Feature Checking describes how we can perhaps eliminate, through redistributed testing, the repetitive, boring and less effective manual feature checking.

It seems like the software development business is changing. It is 'Shifting Left' but this change is not being led by testers. The DevOps, Continuous Delivery, Behaviour-Driven Development advocates are winning their battles and testers may be left out in the cold.

Because the shift-left movement is gathering momentum, and Big Data and the Internet of Everything are on the way, I now believe that we need a New Model of Testing. I'm working on this right now. I have presented drafts of the model to audiences in the UK, Finland, Poland and Romania and the feedback has been really positive.

You can see a rather lengthy introduction to the idea on the EuroSTAR website here. The article is titled: The Pleasure of Exploring, Developing and Testing. I hope you find it interesting and useful. I will publish a blog with the New Model for Testing soon. Watch this space.

That's what's new in testing for me. What's new for you?

Tags: #Python #IOE #IOT #InternetofEverything #programmingfortesters #InternetofThings #NewModelTesting

Paul Gerrard

First published 10/04/2014

A question from Amanda in Louisville, Kentucky, USA.

“What's the acceptable involvement of a QA analyst in the requirements process?  Is it acceptable to communicate with users or should the QA analyst work exclusively with the business team when interpreting requirements and filling gaps?

As testers, we sometimes must make dreaded assumptions and it often helps to have an awareness of the users' experiences and expectations.”

“Interesting question, Amanda. Firstly, I want to park the ‘acceptable’ part of your question. I’ll come back to it, I promise.

Let me suggest, firstly, that collaboration and consensus between users, BAs, developers and testers is helpful in almost all circumstances. You may have heard the phrase ‘three amigos’ in Agile circles to describe user/BA, developer and tester collaboration. What Agile has reminded us of most strongly is that regular and rapid feedback is what keeps momentum going in knowledge-based (i.e. software development) projects.

In collaborative teams, knowledge gets shared fast and ‘dreaded assumptions’ don’t turn into disasters. I can think of no circumstance where a tester should not be allowed to ask awkward questions relating to requirements like ‘did you really mean this...?’, ‘what happens if...?’, ‘Can you explain this anomaly?’, ‘If I assume this..., am I correct?’. Mostly, these questions can be prefaced with another.

‘Can I ask a stupid question?’ reduces the chance of a defensive or negative response. You get the idea, I’m sure.

Where there is uncertainty, people make assumptions unless they are encouraged to ask questions and challenge other people’s thinking – to get to the bottom of problems. If you (as a tester) make assumptions, it’s likely that your developers will too (and different assumptions, for sure). Needless to say the users, all along, may be assuming something entirely different. Assume makes an ass of u and me (heard that one before?)

So – collaboration is a very positive thing.

Now, you ask whether it is ‘acceptable’ for testers to talk direct to users. When might it not be “acceptable”? I can think of two situations at least. (There are probably more).

One would be where you as a tester work for a system supplier and the users and BAs work for your customer. Potentially, because of commercial/contractual constraints you might not be allowed to communicate directly. There is a risk (on both sides) that a private agreement between people who work for the supplier and customer might undermine or conflict with an existing contract. The formal channels of communication must be followed. It is a less efficient way of working, but sometimes you just have to abide by commercial rules. Large, government or high-integrity projects often follow this pattern.

Another situation may be this. The BA perceives their role to be the interface between end users and a software project team. No one is allowed to talk direct to users because private agreements can cause mayhem if only some parties are aware of them. The BA is accountable to users and the rest of the project team for changes to requirements. There may be good reasons for this, but if you all work for the same organisation what doesn’t help is a ‘middle man’ who adds no value but distorts (unknowingly, accidentally or deliberately) the question from a tester and the response from a user.

Now, a good (IMHO) BA would see it as perfectly natural to allow testers (and other project participants) to ask questions of users directly, but it is also reasonable for them to be present, to assess consequences, to facilitate discussion, to capture changed requirements and disseminate them. That’s pretty much their job. A tester asking awkward questions is teasing out value and reducing uncertainty – a good thing. Who would argue with that?

But some BAs feel they ‘own’ the relationship with users. They get terribly precious about it and feel threatened and get defensive if other people intervene. In this case, the ‘not acceptable’ situation arises. I have to say, this situation reflects a rather dysfunctional relationship, not a good one. It isn’t helpful, puts barriers in the way of collaboration, introduces noise and error into the flow of information, causes delays and causes uncertainty. Altogether a very bad thing!

Having said all that, with this rather long reply I’ve overrun some quota or other, I’m sure. The questions I would ask are: ‘unacceptable to whom?’ and ‘why?’ Are BAs defending a sensible arrangement or are they being a pain in the assumption?”



Tags: #FAQ #BusinessAnalysis

Paul Gerrard

First published 10/06/2014

Should Testers Learn How to Write Code?

In my previous article, ‘The testers and coding debate: Can we move on now?’ [1], I suggested that:

  • Tester programming skills are helpful in some situations and having those skills would make a tester more productive
  • It doesn’t make sense to mandate these skills unless your organization is moving to a new way of working, e.g. shift-left
  • Tester programming skills rarely need to be as comprehensive as a professional programmer’s
  • A tester-programming training syllabus should map to required capabilities and include code-design and automated checking methods.

We should move on from the ‘debate’ and start thinking more seriously about appropriate development approaches for testers who need and want more technical capabilities.

How should testers acquire coding skills?

As I also said in the original article [1], I am not an expert in the teaching of programming languages or computer science in general. I have not studied the many writings of experts in this field who, needless to say, tend to be academics. But I am trusting an international team of academics who have [2]. They reviewed the research literature in this area (101 sources were cited). Their concluding remarks include this:

“We conclude that despite the large volume of literature in this area, there is little systematic evidence to support any particular approach. For that reason, we have not attempted to give a canonical answer to the question of how to teach introductory programming.”

So, with the belief that I am on reasonably safe ground, I will suggest that most commercially available programming courses are inappropriate for teaching programming to software testers and that a different approach is required. The programming skills taught to testers should be driven by the need to improve capabilities and must include fundamental code design principles and automated checking methods.

In the remainder of this article, I want to propose an approach to teaching non- or partly-technical testers enough programming skills to support broader testing-related capabilities.

Don’t think skills, think capabilities

The polarising question, ‘to code or not to code’ is unhelpful. It makes much more sense to ask about capabilities. Let us ask, ‘what software-related capabilities does a tester need to do their job?’ Well, of course, it depends on the job. Here is a non-definitive list of tester capabilities that some testers (and their teams) might find useful in different situations (prefix each with ‘ability to…’):

a)      Read and understand the flow of control of code, fundamentals of algorithms

b)      Construct, execute and interpret tests using a unit test framework (in a TDD, BDD context)

c)      Construct, execute and interpret tests using a GUI test tool (in a system testing context)

d)      Challenge and discuss with developers the coverage, value, thoroughness of unit tests

e)      Write simple utilities to extract, filter, sort, merge, generate test data from a structured database and non-structured XML, HTML, CSV, plain text sources (a sketch of such a utility follows this list)

f)       Use or adapt existing libraries to construct tests of networked devices, web servers, services, mail, FTP, terminal servers

… and so on.
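To make capability (e) concrete, here is a minimal sketch in Python of a test-data filtering utility. This is purely illustrative – the file name and column names are invented, and it uses nothing beyond the standard library:

    import csv

    def filter_rows(path, column, value):
        """Yield rows from a CSV test-data file where `column` equals `value`."""
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                if row.get(column) == value:
                    yield row

    # Hypothetical usage: extract the Finnish customer records from a test-data file.
    if __name__ == "__main__":
        for row in filter_rows("customers.csv", "country", "FI"):
            print(row)

A dozen lines of standard-library code like this is often all that capability (e) demands; the same pattern extends to sorting, merging and generating data.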

I should explain why I used the term capability, rather than competency. In some circles, competency implies having qualifications, being certified etc. So to avoid being drawn into that debate, I’ll use the word capability from now on. However, the word capability is no less of a challenge. The SEI’s Capability Maturity Model (Integrated) [3] is also a powerful and polarising force. CMMI is a certification industry in its own right. Even so, capability is the most appropriate term for current purposes.

Mapping technical skills to new capabilities

The capabilities (or team/project requirements) above will be familiar to most testers. Usually, non-technical testers will consult their development colleagues, or bring in a test automation specialist or tool-smith to help out. Alternatively, they might have to acquire the skills to do the technical tasks themselves. What technical skills are required for each capability above?

  1. All of these capabilities require a fundamental understanding of how code is written and experience of writing code in any language
  2. Capabilities b-f require skills in a selected language
  3. Capability b requires knowledge of one unit test framework
  4. Capability c requires knowledge of a GUI test tool or library
  5. Capability d requires a deeper insight into code construction and the significance of coverage
  6. Capability e requires knowledge of standard libraries (to access DBs, parse XML, HTML, to use regular expressions)
  7. Capability f requires knowledge of libraries that require additional technical or architectural knowledge (to drive network traffic or drive HTTP/web, mail servers etc.)

Several attempts at programming (or software engineering) competency levels have been published. One commonly referred to is here [4] but it focuses on aspects of programming, systems architecture, software engineering skills. It looks more like a software engineering syllabus than a competency roadmap. For now, let’s park the idea of a competency or maturity scheme.

Gaining capabilities

Now, the capabilities above are not in any particular order. They might be required in a variety of contexts and the skills profile for each capability differs. Although the fundamental knowledge required to write code can be taught, the way you become more proficient is to take on increasingly ambitious assignments that require deeper knowledge. It is your ambition and willingness to learn new things that determine how far you progress as a programmer. Let me illustrate this with reference to the libraries you might need to understand to build some functionality.

In one respect, it’s just the libraries you use that vary, but as you tackle more advanced problems the skills you need become more sophisticated. Here’s one way of looking at the levels of capability. Read as ‘ability to do/use…’:

  1. Basic programming that requires no special library support
  2. Libraries that form part of the core language
  3. Libraries mentioned in language cook-books, for example, that support activities outside the core language e.g. more advanced algorithms, messaging, integration with social media etc.
  4. Research, evaluation, selection and use of libraries that are less well known, not well documented perhaps, not mentioned in books etc. e.g. data analysis, real-time, leading edge products (everything was leading edge at some time) etc.
  5. Creation of your own generic libraries, where no usable library yet exists.

It is reasonable to expect that a junior programmer should be capable of level 3, an experienced programmer level 4. A programmer needs the programming language skills but also the ability and confidence to make their own choices. Testers need to understand the progression from beginner to the confidence and independence I’ve characterised as level 3 above.

What and how do testers need to learn?

All this talk of capabilities is fine but the questions a tester needs answering are:

  • I need a capability. What skills do I need?
  • How do I acquire them?

I am going to suggest that, firstly, the motivation to learn is the need to acquire some tool or utility that improves your capability and helps you in your job. It is all very well learning to write code by rote, coding dummy programs that have little value outside the class. But if, at the end of the course, you are left with some working code that has value, you have the beginnings of a toolkit that enhances your capability and that you can evolve, improve and share.

The best way to learn a particular aspect of programming is to have working examples that you write yourself or samples that you are provided with, that you can adapt and improve to suit your own purposes. Of course, in a training course, some example code that performs a useful function has to be limited in scope to allow it to be usable for teaching. It might also have to be a suboptimal solution to the problem at hand. For example, it might not use the most sophisticated libraries or the most efficient algorithm etc. Inevitably, the offered solution to a set problem can only ever be one of many possible. Other real-world factors outside the classroom might determine what the ultimate solution should look like.

Of course, you also need a safe and assistive programming and testing environment and a good teacher.

Existing courses will not do

Almost all commercial programming training courses aim to teach the basics of a selected programming language. They focus almost entirely on the syntax of the language so what you learn is how to use the fundamental language constructs. Now, this is fine if you are already a programmer or you want to recognize the elements of code when you see them. But they don’t prepare you very well if you are a beginner and want to perform a real-world task:

  • They don’t teach you much about good design or good (and bad) programming practices or styles
  • If the language is object-oriented, it’s unlikely you will know how object-oriented design works (even if you know how to create and use objects)
  • You will learn a lot of language elements that you might never use, or not need for some time to come
  • Testing your programs is almost certainly not given much attention in the syllabus
  • You may not have written any programs that are useful.

Programming classes for testers need to address all these aspects.

A roadmap for learning

What I’m proposing here is a set of design principles for programming training for testers and, in the Appendix, I’ve suggested one possible course structure that has a specific set of capabilities in mind.

Pragmatic course design principles

A set of course design principles explains the thinking behind the roadmap and the content of specific courses. The word ‘client’ refers to the tester or the employer of the testers to be trained.

Task-Oriented

The course will specifically aim to teach people through example and practice to write code to perform specific tasks. The aim is not to learn every aspect of the selected language.

Pragmatic, Not Perfect

The course exercises, examples and take-away solutions will not be perfect. They may be less efficient, duplicate code available in pre-existing libraries, they might not use objects – but they will work.

All Courses Begin with a Foundation Module

The first, foundation module will cover the most fundamental aspects of program design and process and the most basic language elements. All subsequent modules depend on the Foundation module.

Content Driven by Demand for Capabilities

The course topics taught (the philosophy, programming language elements) will be sufficient to meet the requirements of the exercises, which in turn meet the requirements of a capability.

A Language Learning Process is Part of the Course

Because the course will not cover all language elements or available libraries, there will be gaps in knowledge. The course must allow you to practice finding the right language construct, useful libraries and suitable code samples from other (usually online) sources so you are not lost when you encounter a new requirement.

A Scripting Language Should be the Default

Where the client has no preference for a particular language as the subject for the course, a scripting language (Python, Ruby, Perl etc.) would be selected. The initial goal must be to learn the fundamentals of writing software in an easy-to-use language. Popular languages like C++, C# and Java have a steeper learning curve. However, if your developers write all software in Java, for example, it is reasonable for the testers to learn Java too (although Java might not be the easiest ‘first language’ to learn).

Integrated Development Environments (IDEs)

IDEs are valuable resources for professional programmers. However, initial course modules would use simple, language-sensitive editors (Notepad++, Gedit, Emacs etc.). Use of a specific IDE would be the subject of a dedicated course module.

Open Source Wherever Possible

To avoid the complications of proprietary licensing and to facilitate skills and software portability, open source language interpreters/compilers, test frameworks and language editors should be used.

Minimal Coupling Between Course Modules

The focus of each module is to build a useful utility and for each module to be independent of others (which might not be required). Of course, for modules to be independent, some topics will be duplicated across modules. Courses must allow the trainer to skip (or review) the topics already covered to avoid covering the same topics twice in different modules.

And with that long list of principles, let’s sketch out how the structure of a course might look.

Course structure

The Structure will comprise a set of modules including an initial Foundation module and other modules that each focus on building a useful utility. Appendix A presents a possible course consisting of four modules that will give students enough knowledge to write and use:

  1. Foundations of programming in the selected language
  2. A text file searching and pattern matching utility (sketched after this list)
  3. A simple HTTP Website driver
  4. A simple Web services driver.
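As a taste of what module 2 might produce, here is a minimal, hedged sketch of a grep-like text file searching utility, assuming Python as the selected scripting language (my illustration, not published course material):

    import re
    import sys

    def search(pattern, paths):
        """Print file:line-number:line for every line matching the pattern."""
        regex = re.compile(pattern)
        for path in paths:
            with open(path, errors="replace") as f:
                for number, line in enumerate(f, start=1):
                    if regex.search(line):
                        print(f"{path}:{number}:{line.rstrip()}")

    if __name__ == "__main__":
        # e.g. python search.py "ERROR|WARN" server.log
        search(sys.argv[1], sys.argv[2:])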

This four-module course is just an example. Although the Foundation Module is quite generic and should give all students a basis to write simple programs and make progress, it could be enhanced to include more advanced topics such as Regular Expressions or XML Processing or Web site messaging, for example.

The overall theme of the example course is the testing of web sites and web services and manipulation of the data retrieved. Obviously, courses with a different focus are likely to be required. For example:

  • Database: Manipulation of data in relational technologies to manage, extract, generate, validate data (this would require a primer in relational databases and SQL)
  • Unit Testing: Use of a unit test framework to test classes, components and systems (a minimal sketch follows this list)
  • GUI Test Automation: Use of a proprietary or open source test execution tool or framework to drive desktop, web or mobile applications.
  • Data Analysis: Use of data analysis libraries (In the case of Python these could be NumPy, Pandas, Matplotlib etc.)
  • Big Data: Manipulation of data in NoSQL (not only SQL) technologies to manage, extract, generate, validate data
  • … and so on
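For the Unit Testing focus above, a course exercise might look something like this sketch using Python's built-in unittest framework; the function under test is a made-up example:

    import unittest

    def discount(price, percent):
        """Apply a percentage discount to a price."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    class DiscountTests(unittest.TestCase):
        def test_typical_discount(self):
            self.assertEqual(discount(100.0, 25), 75.0)

        def test_invalid_percent_is_rejected(self):
            with self.assertRaises(ValueError):
                discount(100.0, 150)

    if __name__ == "__main__":
        unittest.main()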

Of course, the precise content of such courses would need to be discussed with the client to ensure the course provider can meet their requirements.

Summary

In this paper, I have tried to identify the situations where it would be appropriate for testers to learn the basics of a programming language and to suggest that programming knowledge, even if superficial, could benefit the tester.

Existing, commercial programming courses tend to be focused on teaching the entirety of a language and tend to be aimed at experienced programmers wishing to learn a new language. Testers need a differently focused course that will teach them a valuable subset of the technology that provides specific capabilities relevant to their job. Programming courses for testers need to be designed to support this somewhat different requirement.

A set of guiding principles for creating such courses is presented, and a course description, for testing web sites and services through APIs, is offered as an example. Other, differently focused courses would obviously be required.

This is a work in progress and comments, criticisms and suggestions for improvement are welcomed. We are interested to work with other organizations with specialisms in different technologies or business domains to refine this approach. We are also seeking to work with potential clients to develop useful course materials.

References

1. Paul Gerrard, ‘The Testers and Coding Debate: Can We Move on Now?’

2. Arnold Pears et al., ‘A Survey of Literature on the Teaching of Introductory Programming’

3. CMMI Institute, http://cmmiinstitute.com/

4. Sijin Joseph, ‘Programmer Competency Matrix’



Tags: #CodingforTesters #Coding #roadmap

Paul Gerrard My linkedin profile is here

First published 21/04/2022

Background

Michael Bolton recently posted a message on LinkedIn (https://www.linkedin.com/posts/michael-bolton-08847_low-code-testing-tools-are-really-low-testing-activity-6921751556463685633-pcQS?utm_source=linkedin_share&utm_medium=member_desktop_web) as follows:

“Low-code testing tools” are really “low-testing code tools”

In what follows I imply no criticism of Michael or the people who responded to the post, whether positive or negative.

I want to use Michael’s proposition to explain why we need to be much more careful about:

  • How we interpret what other people say or write
  • How we assess its clarity, credibility, logical consistency or truthfulness
  • Whether we understand the assumptions, experience or agenda of the author, or of ourselves as the audience
  • And so on…

This is a critical thinking exploration of a ten-word, five-term sentence.

I’ve been reading a useful book recently, “The Socratic Way of Questioning” – written by Michael Britton and published by Thinknetic. Thinknetic publish quite a few books in the critical thinking domain and you can see these here:   https://www.amazon.co.uk/Thinknetic/e/B091TWBHXN/

I’ll use a few ideas I gleaned from the book, describe their relevance in a semi-serious analysis of Michael’s proposition and, hopefully, illustrate some aspects of Critical Thinking – which is the real topic of this article. I don’t claim to be an expert in the topic. But I’m a decent reviewer and can posit a reasonable argument. I’ll try to share my thought process concerning the proposition and take a few detours on the overall journey to expand on a few principles.

I’ve no doubt that some of you could do a better job than I have.

The importance of agreed definitions

The first consideration is that of definition. How can you have a meaningful discussion without agreed meanings of the words used in that discussion? Socrates himself is said to have spent at least half of his time in argument trying to get consensus on the meanings of words. Plato’s Republic, the most famous Socratic dialogue, spends most of its time defining just one term – justice.

Turning towards the proposition, what do the terms used actually mean? Do they mean the same thing to you as they mean to Michael? Is there an agreed, perhaps universally agreed definition of these terms? So, what do the following terms mean?

  • Code
  • Testing
  • Tools
  • Low-Code
  • Low-Testing

Now, in exploring the potential or actual meanings of these terms, I’ll have to use some, perhaps many, terms that I won’t define. Defining them precisely in a short article is inevitably going to get very cumbersome – so I won’t do that. Of course, we need an agreed glossary of terms and their definitions as the basis of the definitions at hand. But it’s almost a never-ending circle of definitions. The Oxford English Dictionary, when I last looked, has compiled over 600,000 definitions of words and phrases covering “1000 years of English” (https://public.oed.com/history/).

Recently, I have been doing some research with ‘merged’ regular English dictionaries and some of the testing-related glossaries available on the internet. I now have some Python utilities using open-source libraries to scrape web pages and PDF documents and scan them for terms defined in these sources as well as detection of phrases deemed ‘meaningful’. (What meaningful means is not necessarily obvious). I'll let you know what I'm up to in a future post.
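To give a flavour of these utilities (and no more – this is a simplified sketch with a toy glossary and a placeholder URL, not my actual code), term-counting over a web page can be done with the standard library alone:

    import re
    import urllib.request

    GLOSSARY = {"testing", "tool", "code", "low-code"}  # toy glossary

    def count_terms(url, terms):
        """Fetch a page, crudely strip HTML tags and count each glossary term."""
        html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
        text = re.sub(r"<[^>]+>", " ", html).lower()
        return {t: len(re.findall(rf"\b{re.escape(t)}\b", text)) for t in terms}

    if __name__ == "__main__":
        print(count_terms("https://example.com", GLOSSARY))

The real work, of course, is in deciding what counts as a ‘meaningful’ phrase – the counting is the easy part.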

We have a real problem in the language used in our product and service marketing, and sadly our exchanges on testing topics in discussions/debates.

Definitions

Here goes with my definitions and sources, where I could identify and/or select a preferred source. I have tried to use what I think might be Michael’s intended definition. (I am surely wrong, but here goes anyway).

Code: generally defined as a programming or script language used to create functional software to: control devices; capture, store, manipulate and present data; provide services to other systems or to humans. In this context, we’re thinking of code that controls the execution of some software under test.

Testing: …is the process of evaluating a product by learning about it through experiencing, exploring, and experimenting, which includes to some degree: questioning, study, modelling, observation, inference, etc. (Source: https://www.satisfice.com/blog/archives/856 – there’s a security vulnerability on the page by the way – so use an incognito window if your browser blocks it).

Tools: Oxford English Dictionary: a device or implement, especially one held in the hand, used to carry out a particular function. Merriam Webster: something (such as an instrument or apparatus) used in performing an operation or necessary in the practice of a vocation or profession. In our context – software test execution automation – we refer to software programs that can: stimulate a system under test (SUT); send inputs to and receive outputs from the SUT; record specific behaviours of the SUT; compare received outputs or recorded behaviours with prepared results. And so on.

Low-Code: There is no agreed definition. Here is an example relating to software-building which is reasonable https://www.mendix.com/low-code-guide/: “Low-code is a visual approach to software development that optimizes the entire development process to accelerate delivery. With low-code, you can abstract and automate every step of the application lifecycle to streamline the deployment of a variety of solutions.” I’d substitute software test development for software development. In the context of testing, the term ‘codeless’ is more often used. This definition appears on a tools review website (https://briananderson2209.medium.com/top-10-codeless-testing-tools-in-2020-every-tester-should-know-2cb4bd119313): “Simply put, codeless test automation means creating automated tests without the need to physically write codes. It provides testers lacking sophisticated programming skills with project templates for workflow, element libraries, and interface customization.”

Low-testing: I’m pretty certain Michael has invented this term without defining it. But we can guess, I think. There are two alternatives – either (a) the quantity of testing is lower (than what?). We don’t know. But let’s assume it refers to ‘enough’... or (b) the quality of testing is lower than it should be (by Michael’s standard, or someone else’s – we don’t know). I suspect (b) is the intended interpretation.

Interpreting the Proposition

Since we now have some working definitions of the individual terms, we can expand the proposition into a sentence that might be more explicit. We’ll have to document our assumptions. This carries a risk: have we ‘guessed’ and misunderstood Michael’s purpose, thinking, assumptions and intended meaning?

Anyway. Here goes:

“Using test execution tools that require less (or no) coding means the quality (or quantity, possibly) of your testing will be reduced.”

Here are our assumptions in performing this expansion:

  • The ‘low-testing’ phrase implies that testing of lower quality or quantity results. (Does it matter which? It would probably help to know, of course).
  • It is the use of such tools that causes this loss of quality or quantity – presumably, having such tools but not using them does not.
  • Having knowledge of some of Michael’s previous writings about tools and testing justifies our (still ambiguous) expanded phrase, I believe.

Ambiguities

We have several ambiguities in our reformulation of the original proposition:

  • Do low-code tools have the effect of lowering the quality/quantity of testing or do ‘high-code’ – that is, all tools – do this?
  • Do tools lower the quantity, or do they lower the quality of testing? (Or both?)
  • Lowered compared to what? Testing without tools? (Or see below… tools with ‘higher-code’)
  • What is meant by ‘the quality of testing’? How is that defined? How could it be measured? Evaluated?
  • The quantity of testing might be reduced – but might this be a good thing, if the quality is the same or improved?
  • There’s no mention of the benefits of using (low-code) tools – but is the overall effect of using tools (low or high code) detrimental or beneficial?
  • There is no mention of context here. Is the proposition true for all contexts or just some? Could using such tools possibly be a universally best practice? Or a universally bad one?

Why This Proposition?

Is this proposition making a serious point? Does it inform a wider audience? I don’t think so. It’s clickbait. Some people will (perhaps angrily) agree and some (angrily) disagree. These people will probably be driven by preconceptions and existing beliefs. Do ‘thinking people’ respond to such invitations?

It’s a common tactic on internet forums and I’ve used the tactic myself from time to time, although I think it’s a lazy way of starting a debate: these ‘thoughts from abroad/musings/worldview/the toilet’ (strike out what doesn’t apply) tend not to trigger debate, they just cause some folk to extol or confirm their existing beliefs.

So, don’t ask me, ask Michael.

Summary

I’ve written over a thousand words so far, analysing a ten-word, five-term proposition. I’m sure there is much more I could have written if I thought more deeply about the topic. A word-ratio of 100 to 1 isn’t surprising. Many books have been written discussing less complicated but more significant propositions. Love, justice, quality, democracy… you know what I mean.

The lack of agreed definitions of the terms we use can be a real stumbling block, leading to misinterpretations and misconceptions hardly likely to improve our communications, consensus and understanding. Even with documented definitions there are problems if people dispute them.

Before you engage in a troublesome conversation about a proposition, ask for definitions of the terms used and try, with patience, to appreciate what the other person is actually saying. You may find gold or you may demolish their proposition. Or both.

Finally

Am I suggesting you apply critical thinking to all conversations, all communications? No of course not. Life is too short. And surely, if you analyse all your partner’s words you are bound to land in scalding hot water. So, like testing and all criticism – it has its place. A weapon of choice, in its place.

Critical thinking, like testing, should be reserved for special occasions. When you really want to get to the bottom of what an author, speaker or presenter is saying or a supplier is supplying. How can you agree or disagree without understanding exactly what is being proffered? Critical thinking works.

What’s my Response to the Proposition?

“After all this circumlocution, what do you think of the proposition, Paul?”

I think there are more interesting propositions to debate.



Tags: #thinkingtools #testing #low-codetesttool #codelesstesttool #criticalthinking #clickbait #testingterminology

Paul Gerrard

First published 06/03/2015

This blog first appeared on the EuroSTAR blog in April 2014 (http://conference.eurostarsoftwaretesting.com/2014/courage-and-ambition-in-teaching-and-learning/)

In 2013, Cem Kaner (http://kaner.com) asked me to review a draft of the ‘Domain Testing Workbook’ (DTW) written by Cem in partnership with Sowmya Padmanabhan and Douglas Hoffman. I was happy to oblige and pleased to see the book published in October 2013. At 480 pages, it’s a substantial work and I strongly recommend it. I want to share some ideas we exchanged at the time and these relate to the ‘Transfer Problem’.

In academic circles, the Transfer of Learning relates to how a student applies the knowledge they gain in class to different situations or the real-world. In the preface of DTW, the transfer problem is discussed. Sowmya and Cem relate some observations of student performance in a final exam which contained:

  • Questions that were similar to those the students had already experienced in class and homework assignments and
  • One question that required students to combine skills and knowledge in a way that had been mentioned in lectures, but that they had not practiced.
Almost every student handled the first type of question very well, but every student failed the more challenging question. It appears that the students were able to apply their knowledge in familiar situations, but not in an unfamiliar one. The transfer problem has been studied by academics and is a serious problem in the teaching of science in particular, but it also seems to exist in the teaching of software testing.

The ‘Transfer of Learning’ challenge is an interesting and familiar topic to me.

Like many people, in my final school year I sat A-Level examinations. In my chosen subjects – Mathematics, Physics and Chemistry – the questions in exams tended to focus on ‘point topics’ lifted directly from the syllabus. The questions were similar to the practice questions on previous exams. But I also sat Scholarship or S-Level exams in maths and physics. In these exams, the questions were somewhat harder because they tended to merge two or even more syllabus concepts into one problem. These were clearly harder to answer and required more imagination and, I’m tempted to say, courage in some respects. I recall a simple example (it sticks in my mind – exam questions have a tendency to do that, don’t they?)

Now, the student would be familiar with the modulus of a number, |x|, being its absolute or positive value (x can be a positive or negative number), and with the familiar quadratic equation ax² + bx + c = 0, which can often be solved by trial and error but can always be solved using the quadratic formula. A quadratic with a modulus in it would not be familiar, however, so this problem demands a little more care. I leave it to you to solve.
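The exact exam question hasn't survived in this post, but to illustrate the idea, take a quadratic in |x| such as x² − 5|x| + 6 = 0 (my invented stand-in, not the original). Since x² = |x|², substituting y = |x| reduces it to an ordinary quadratic, with the extra constraint that y ≥ 0, and each valid y then gives two solutions, ±y. A few lines of Python confirm it:

    import math

    # Solve y**2 - 5y + 6 = 0, where y = |x| and so must be non-negative.
    a, b, c = 1, -5, 6
    disc = b * b - 4 * a * c
    ys = {(-b + s * math.sqrt(disc)) / (2 * a) for s in (1, -1)}
    xs = sorted(x for y in ys if y >= 0 for x in (y, -y))
    print(xs)  # [-3.0, -2.0, 2.0, 3.0] – four roots, not the usual two

The twist – one extra substitution step merging two familiar concepts – is exactly what made the S-Level questions harder.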

Now, this ‘harder question style’ was (and still is) used by some universities in their own entrance exam papers. I sat university entrance exams which were exclusively of this pattern. Whether it is an effective discriminator of talent – I don’t know – but I got through them, thank goodness.

But my experience of testers not being able to transfer skills to the real world or to more complex contexts is a manifestation of the ‘transfer problem’. It seems to me that it’s not lack of intellect that causes people to struggle with problems out of a familiar context. Instead, I’d like to consider two attitudes to teaching and learning that we should encourage – courage and ambition. For the first, I will draw a parallel with sports coaching.

Most years, I coach rowing at my local club. In rowing, and in particular sculling, if a sculler makes a mistake they can capsize the boat, fall into the river and get rather wet. So there’s a risk, and the risk makes people unwilling to commit to the correct rowing technique. Correct technique demands that, firstly, the rower gets their blades off the water, which leaves the rower in a very vulnerable, unstable situation. They have to learn how to balance a boat before they can move a boat quickly, so they must be confident first, skilled second, and then they can actually apply their power to make the boat move quickly.

It’s a bit like taking the stabilisers (training wheels) off a pushbike – it takes some confidence and skill for a beginner rider to do that. Coaching and learning tightrope walking, skiing, climbing and gymnastics are all similar.

Athlete coaching technique involves asking athletes to have courage, to trust their equipment, the correct technique and the laws of physics and to not fear the water or a fall in the snow. In fact, coaches almost force people to fail so they recognise that failure doesn’t hurt so much and in fact, they can commit knowing the consequence of failure is not so bad after all.

I remember many years ago when I was learning to ski in a class of ten people – at one point on a new slope, the whole class was having difficulty. So the ski instructor took us to an ‘easier slope’. We struggled there too, but made some progress. Then we went back to the first slope. Remarkably, everyone could ski down the first slope with ease. In fact, the ski instructor had lied to us – he took us to a harder slope to ‘go back to basics’. It turned out that it was confidence that we lacked, not the skill.

Getting people to recognise that the risk isn’t so bad, to place trust in things they know, to have courage to try and keep trying can’t be learnt from a book or online course. It takes practice in the real world, perhaps in some form of apprenticeship and with coaching, not just teaching, support. Coaches must strongly challenge the people they coach, continuously.

The best that a book can do is present the student (and teacher) some harder problems like this with worked examples. If we expect the student to fail, we should still set them this kind of problem, but then the teacher/coach has to walk through the solution, pointing out carefully, that it’s not just allowed, but that it really is essential to ‘think outside the box/core syllabus’. Perhaps even to trust their hunches.

Coaches/trainers and testers both need courage.

The test design techniques are often taught as rote procedures whereby one learns to identify a coverage item (a boundary, a state-transition, a decision in code) and then derive test cases to cover those items until 100% coverage is achieved. There is nothing wrong with knowing these techniques, but they always seem to be taught out of context. Practice problems are based on static, unambiguous, but above all simple requirements or code, so when the student sees a real, complicated, ambiguous, unstable requirement it’s no wonder they find it hard to apply the techniques effectively – or at all. A sketch of one such rote procedure follows.
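Here is what the rote procedure amounts to for the simplest of these techniques, boundary value analysis – a hedged Python sketch with an invented valid range, just to show how mechanical it is:

    def boundary_values(low, high):
        """Two-value boundary analysis: each boundary plus its
        nearest invalid neighbour."""
        return sorted({low - 1, low, high, high + 1})

    # A field accepting 1..100 yields the test inputs 0, 1, 100 and 101.
    print(boundary_values(1, 100))  # [0, 1, 100, 101]

Covering those four values mechanically is easy; knowing whether the boundaries in a real, ambiguous requirement are even the right model is the hard part.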

These stock techniques are often presented as a way of preparing documentation to be used as test scripts. They aren’t taught as test models having more or less effectiveness or value for money to be selected and managed. They are taught as clerical procedures. The problem with real requirements is you need half a dozen different models on each page, on each paragraph, even.

A key aspect of exploratory testing is that you should not be constrained but should be allowed and encouraged to choose models that align with the task in hand so that they are more direct, appropriate and relevant. But the ‘freedom of model choice’ applies to all testing, not just exploratory, because at one level, all testing is exploratory. I’ve said that before as well (http://gerrardconsulting.com/index.php?q=node/588).

In future, testers need to be granted the freedom of choice of test models but for this to work, testers must hone their modelling skills. We should be teaching what test models are and how models can be derived, compared, discarded, selected and used. This is a much more ambitious goal than teaching and learning the rote-procedures that we call, rather pompously, test design techniques. I am creating a full-day workshop to explore how we use models and modelling in testing. If you are interested or have suggestions for how it should work, I’d be very interested to hear from you.

We need to be more ambitious in what we teach and learn as testers.

Tags: #teaching #learning

Paul Gerrard

First published 03/05/2013

The nice folk at Diaz Hilterscheid have made available the video of my track talk “Continuous Delivery of Long Term Requirements” at Agile Testing Days 2012.

The original write up ran: "Larger, multi-stakeholder projects take time to establish consensus. Usually, no individual is able (or available) to be the on-site customer, and even if one is nominated, they are accountable to multiple stakeholders and the consultation and agreement feedback loop can take weeks or months. Now, it could be said that an organisation that takes months to make up its mind can never be Agile and they are doomed always to use structured, staged, bureaucratic approaches.

How could a company used to (and committed to) managing stakeholder expectations in a systematic way over longer timescales take advantage of Agile approaches to development and delivery? Put it another way. If a corporate has a trusted set of business rules defined in requirements, can delivery of a system to meet those requirements be achieved in an Agile, continuous way? How can larger requirements, evolved over weeks or months be channelled into teams doing continuous delivery?

It’s all about trust. The requirements that feed the beast of continuous delivery must be trusted to be mature, complete and coherent enough to deliver the business value envisaged by stakeholders. In one way or another, the requirements must be tested and trusted enough to pour into larger or collaborating Agile teams. Note: *trusted*, not perfect. This talk explores how larger-scale requirements management and testing could drive a Continuous Delivery process. It’s not ‘pure Agile’. Rather, it’s lean and close to being a factory-based approach, but it might be the way to achieve Agile delivery in a non-Agile business."


Tags: #continuousdelivery #AgileTestingDays #CI #CD

Paul Gerrard

First published 09/05/2014

I’ve been meaning to write an article on writing conference abstracts and submissions for a while. I’m prompted to do this now because I’ve had to provide feedback to some of the many EuroSTAR submitters who asked for feedback. I can’t provide an individual response to the nearly 400 people who were unsuccessful. But I can provide some generalised feedback on my experience as Programme Chair for EuroSTAR, Host of the Test Management Forum, Co-Programme Chair for Testing in Finance, several Unicom and many BCS SIGiST events and others that I can’t recall. I've also given a few talks in my time.

In particular, I want to address this to the people who are trying to get their talks accepted and who might not yet have succeeded. Personally, I like to go to conference talks by a small number of speakers I know well and respect. But I also like to go to talks from ‘first-timers’. They often have much more energy and new insights that make conference talks so very interesting.

(By the way, if you want to pitch a session at the TMF, do let me know. I’m always on the lookout. Let me know your idea first.)

So, with no further ceremony, here are my Do’s and Don’ts of conference submissions. Mostly, they are Don’ts. I hope you find them useful.

Read the Submission Guidelines

If you ignore the advice that has been put together by the programme team, then you are simply asking for trouble. You will, however, make it easy for the reviewer – they will discount your submission very quickly. These are the mistakes that I would say are really basic. Perhaps they are dumb too. See later.

  • The guidelines ask for 1500-2000 characters and you write 4500 words because you think your submission is so fantastic it can’t be described in less.
  • The guidelines ask for 1500-2000 characters and you write 500 because the content simply isn’t there.
  • The guideline describes a theme and you ignore it (you know better of course)
  • The guideline asks for experience reports and you provide a polemic or it asks for evidence and you offer only speculative theories.
  • The guideline says, ‘no sales pitches’ and you give them ... a sales pitch.
  • The guideline asks for three key points or takeaways and you haven’t got any.
  • You don’t fill in the personal details form properly.

You Only Have One Chance to Make a First Impression

Whether you are writing a CV, presenting yourself for an interview, or pitching an abstract for a conference talk, if you make a bad impression in your title or first paragraph, the reviewer will do you the courtesy of reading your abstract, for sure. But in their mind, they will be looking for further evidence to confirm their first impression and score the submission low.

If your title and opening words trigger a reaction (curiosity, excitement, horror even), the reviewer will read to the end with interest, looking for evidence to score your abstract highly.

Aim to get a good reaction in the first few sentences of your submission.

Be Credible – You are Selling Your Story, Not Soap

Do not promise 250x faster regression testing, a tool/process/service that will transform the industry or the lives of the audience. If you have actually made $1 million out of your idea, then perhaps people would like to hear your story. Otherwise, choose a different tack.

If you are offering a story of how you transformed something in your own company, reviewers will check your profile to confirm that a) you worked for the company, b) for a reasonable length of time, and c) your experience fits the story you tell. Match your story to your experience. Do not under any circumstances lie. Unless misrepresentation and being fooled is a key aspect of your talk.

If you work for a tool vendor, I’m afraid it is really hard to get your story of how wonderful your tool is into a conference. It’s much better to talk about tool classification, or implementation of tools or true experiences of using a type of tool. Unfortunately, there is also some prejudice against speakers from tool vendors, particularly the larger ones. Better that you get a client to talk about how wonderful you are perhaps. Or you talk about something else entirely.

Be Topical

All conferences want sessions that people will think are topical, important, the future. Topicality sells conference tickets.

At some point, object-orientation, client/server, Year 2000, the Internet, web services, Agile, Mobile were on the horizon. If you can pitch a session on something that people have heard of, that sounds important, but about which they know very little, you have an excellent chance of being chosen.  The topics above have faded somewhat. Agile and Mobile may have peaked. So what’s next? Continuous Delivery? DevOps? Internet of Everything? Shift-Left? MicroServices? Try and spot the wave and get on it before it becomes mainstream – if you can.

Don’t Trot Out the Theme, Use it as a Guide or Challenge it

Do not write a title that includes the words of the theme, unless the theme is just one or two words. But if the theme allows it, embrace it. This year’s EuroSTAR theme is ‘Diversity, Innovation, Leadership’. A title like ‘Innovative Leadership in a Diverse Project’ might get some attention, but you had better back it up with some facts and a good story. Otherwise – it’s a sure-fire loser.

When I read an abstract, I imagine removing all the words that appear in the theme. If the abstract still makes sense, then the theme words were added as an afterthought. As tennis umpires might call – Out!

Challenge the theme but don’t undermine it. For example, EuroSTAR 2002 had the theme ‘The Value of Testing’. I chose as my title, ‘What is the Value of Testing and how can we increase it?’ It seemed to me that most people would stick ‘value’ in their title and give the same old talks as before. I wanted to challenge it and explore what value really meant in this context. I got a great talk out of it. In that case, the title came first, the talk came later.

Use the theme as a starting point, not as some words you can insert into an existing abstract. It can be spotted a mile away.

Make it Easy for the Reviewer; Make it Hard for the Reviewer

The reviewers will be reading and scoring tens, possibly hundreds of abstracts. They are looking to get through your abstract quickly and to score it with confidence so they can say, “this is fantastic” or “this is out”. Obviously you want the first reaction. If you write too much text, fill it with jargon, or tell a vague story with a non-specific punch-line – expect reviewers to give you a low score.

Consider using a journalistic approach to bring the facts out quickly, or use ‘Kipling’s Honest Serving Men’ – look it up if you don’t know it.

Make it hard for the reviewer? The reviewers will have in the back of their mind that 80% or 90% of the submissions will have to be rejected. So their reviews are a filtering process. Don’t make it easy for them to discard your proposal.

Say Something New or Say Something Important (or Both)

This seems obvious (like much of my advice here), but again and again, we see people proposing titles like ‘Seven habits of really effective testers’, ‘Ten Laws of Software Testing’, ‘Top Ten mistakes made...’ and so on. Now, there’s nothing wrong with these titles, and I’m sure I have seen several excellent presentations using each of these titles. But that’s the problem – it’s been done before.

  • If your top tips are things like, keep learning, think critically, question everything, remember your goal – I’m afraid everyone on the programme committee has heard these things a hundred times before. Your abstract might well appear to be a trite list of platitudes. Very easy to score low.
  • You might get away with ‘My top tips for using tool X’. But although it’s new and technically strong, it might be really obscure and attract precisely four attendees. No good. Pitch it at a tools conference.
  • Now, most conferences have a lot of inexperienced people and first-time conference goers, and these people need some basic or introductory sessions. It might seem like a step backwards for someone as experienced as you to be talking about ‘the basics’, but perhaps you can recount your own experience of learning your trade, or of coaching and mentoring your own team. A proportion of all conference talks will be focused on beginners. Focus on HOW you learned or taught or coached or mentored – the WHAT is probably well known already.

Writing a Perfect Abstract that Fails

One more mistake that is often made is this. You have a good title. Your opening paragraph sets the scene. You tell your story in a concise, engaging way. Your three key points are well made. And the reviewer gives you a low score. Why is that?

Perhaps the story really could be told in three minutes. The reviewer got all they needed from the abstract – so why go to the talk? Don’t forget the abstract is NOT the talk. The abstract is a sales document, like a CV. It must make people think it worth investing 45 minutes of their time in listening to your story. I’m not just saying ‘don’t give the game away’ – you need to leave your reviewer (and conference attendee) feeling that they want to hear you tell your story in person.

Read your own abstract and ask yourself, “would I want to go to this talk?” Ask a colleague or your boss whether they would want to hear the story.

Some Really Dumb Errors

Finally, some loose ends – stupid mistakes. Don’t get caught out this way.

  1. You tell the reviewers how to do their job
  2. You try and sell a tool, a service, a brilliant project rather than a conference talk
  3. Your abstract patronises the reader
  4. Your talk has been given at seven other conferences this year
  5. You contradict yourself in your abstract
  6. You criticise the theme, the chair, the program team or the conference
  7. You personally attack other professionals and promise to do so in your talk
  8. You promise to be controversial and aren’t
  9. You claim to be the inventor of something that already exists
  10. You submit ten mediocre abstracts that are doomed to fail, when one good one might have succeeded
  11. You promise common sense and make highly dubious claims
  12. You steal someone else’s abstract and pretend it is your own
  13. Your English isn’t that good and you don’t get someone else to review it first
  14. You submit a bad abstract that has been rejected by several other conferences

A Good Abstract is Worth the Effort

Keynotes are usually chosen by the chair or programme committee, but there is no magic pass for experienced speakers into good conference programmes. Pretty much, the experienced speakers are in the same situation as every other submitter: everyone has to write a good abstract. The old hands do have one advantage – they have experience of writing good abstracts. They know what works and what doesn’t, and they are a known quantity, so they might appear to be a ‘low risk’ (although being known can also count against you if you are known to bore or offend people).

More importantly, all conferences want to select some speakers who are new faces, fresh blood, and who will bring new ideas to an audience. So as an inexperienced or first-time conference speaker you have an advantage too. Use the opportunity to get yourself on stage, in front of your peers, and tell your story. There’s no better feeling for a speaker than people telling you they enjoyed your talk. So go for it – and put the effort into writing a good proposal. After that, writing the talk is easy ;O)

I wish you the best of luck.



Tags: #conferenceproposals

Paul Gerrard My linkedin profile is here

First published 22/06/2016

I want to use this post to get a few things off my chest. Most people with an interest in remaining in or leaving the EU recycle other people's posts that align with their point of view. I am no different, and where I thought something merited a repost, I've done it – mostly on Twitter, with a little on Facebook.

Anyway - as voting day looms, here are my closing thoughts on the whole affair.

Firstly, the pretext for the referendum had little to do with Europe. David Cameron offered to hold one as part of a deal to postpone Tory splits and arguments over Europe until after last year's election. The referendum was a bargaining chip in a rather grubby deal made inside the Tory party. With hindsight, he regrets this, as the continuing implosion of all opposition meant he would have been elected anyway.

For weeks in the build-up to the formal campaign, it seemed like the facts would out – that people would make the decision on the basis of information. Being a rational sort, I thought all looked good: people would make a well-informed decision. The clear and present shortcomings of the EU are easily articulated. The benefits are harder to pin down monetarily, but they are substantial.

The performance of the Remain campaigners has been rather limp and incompetent. The Tory leadership have argued for remaining, but you can see their heart is not in it; on other days, those same people would be harsh critics of the organisation they profess to support. The performance of Labour has been pathetic and, worse, late in coming. The biggest dent in the voting numbers may be due to Labour failing to take a position in time. Many Labour voters have not registered or, because of poor leadership, may vote "against the Tories" – but the wrong way.

The behaviour of the Brexiters has been disgraceful, disrespectful, undemocratic and, frankly, un-British. With their distortion of facts, their fabrication of arguments and their rabid anti-foreigner rhetoric, the Brexiters have adopted campaign slogans, arguments and views that used to be confined to extremist right-wing groups or oddballs like UKIP. No more, it seems. Views that, when expressed publicly, once meant you were kicked out of office or out of a mainstream party altogether have become mainstream. This is a disgrace and shames our political system.

It's a pretty sorry state of affairs.

So bear with me, and let me summarise the main issues from my own personal perspective. If you don't know what the issues are, look here for an example list: http://www.bbc.co.uk/news/uk-politics-eu-referendum-36027205

I'll highlight some of the facts and misrepresentations. These figures come from the BBC. Some people might argue the BBC is biased, but I think the BBC, with all its own faults (some of which mirror the EU's), is, when all else is considered, a rock.

The costs and economics of leaving. The gross contribution in 2015 was £17.8bn, but the UK rebate was worth £4.9bn, and £4.4bn was also paid back to the UK government for farm subsidies and other programmes. The net or actual payment is £8.5bn per year, which amounts to around £163m per week – certainly not the £350m a week quoted by the Leave campaign. You can see on the BBC page that the Brexiters argue this net payment can be used to fund other things (though the Tories have shown no enthusiasm for this in the past). They have decided to lie, and everyone knows the bigger the lie, the more likely it is to be believed.
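
Just to sanity-check that arithmetic, here is a quick sketch in Python – purely illustrative; the figures are the BBC's, the code is mine:

    # Sanity check of the BBC figures quoted above (all in GBP billions).
    gross = 17.8      # gross UK contribution, 2015
    rebate = 4.9      # UK rebate
    returned = 4.4    # farm subsidies and other programmes paid back

    net = gross - rebate - returned    # 8.5 (GBP bn per year)
    per_week = net * 1000 / 52         # roughly 163 (GBP m per week)

    print("Net: {:.1f}bn per year, about {:.0f}m per week".format(net, per_week))

Run it and you get £8.5bn a year – about £163m a week. Nowhere near £350m, however you cut it.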

The net loss to the economy of disturbing, let alone leaving, the single market outweighs our payments many times over. In the period 30 May to 13 June, the stock market lost more than £300 billion in value: the improving chances of Brexit caused a major exit from UK shares and Sterling. Shares have recovered as the polls show Remain recovering ground. But given that a large proportion of our pensions are invested in the UK stock market, everyone with a pension in the UK suddenly became poorer – we're talking thousands of pounds poorer – at a stroke. Consider the numbers: investors were £300 billion poorer in just two weeks. Compare that with the COST of membership of £8.5 billion a year.

We'd be INSANE to leave the EU. Of all the countries inside and outside the EU, only Russia – and Donald Trump – think it a good idea to vandalise our economy in this way.

The economic argument is just one perspective. Across all of the issues, the statistics used to make a case for leaving are, when the rhetoric is peeled back, of only marginal importance.

Migration, supposedly, is the Brexiters' trump card. If we leave the EU and refuse to trade with EU countries on the current EU terms, much of our trade with Europe will be suspended. Typically, the largest companies exporting aerospace technologies, other high-end manufacturing and services at scale could be forced to adopt emergency plans. Hundreds or thousands of companies might be affected and go out of business, or choose to base their business elsewhere. Many companies have been making emergency Brexit plans as a precaution. No future government could defend that situation. So we would have to do business with the EU on their terms – terms which include free movement of people.

So leaving the EU would have a chilling effect on our economy – it would be affected adversely, though no one knows by how much – but it would make no difference to the numbers of EU migrants. To think differently is fantasy.

The so-called loss of sovereignty doesn't stand up to much scrutiny. Whenever we join any organisation, we hope to benefit by being members, and we lose a little sovereignty as a consequence. What have we lost so far? Only a few percent of UK laws derive from the EU; our most important ones (e.g. common law, tax, criminal, defence) are unaffected. EU laws are almost entirely focused on protecting workers' rights, and these rights benefit all citizens of EU states. The only people who would benefit from losing them are owners who wish to run companies along what would appear, to most people, to be Dickensian lines – cruel, crooked employers who currently outsource work to other countries anyway. (Check out the backgrounds of the Brexit-supporting employers.)

These laws protect your rights – why would you want to lose them? The EU, with our involvement and agreement, defines these laws. The UK judiciary applies them. The European Court of Justice exists to resolve inter-member disputes or gross breaches. Doesn't that seem reasonable?

The same case for remaining in the EU can be made across all of the issues. Common sense says leaving would be a foolish thing to do.

Enough – the facts don't provide much support for changing our status in the EU. Michael Gove's astonishing suggestion that we can't trust 'experts', or anyone whose views do not align with his, is a corruption. The people our government employs to conduct research and to act as guardians of our laws and economy are no longer to be trusted. The Bank of England is not to be trusted. The Institute for Fiscal Studies, not to be trusted. The IMF, not to be trusted. Our trading partners, inside and outside the EU, are not to be trusted.

Who the hell can we trust then?

Apparently, we can trust Newt-Faced Loser Gove – the most hated, incompetent and eventually fired Education Secretary. Or swivel-eyed, over-ambitious Boris – known to be economical with the truth, always good for a quote or a laugh, rarely answering a direct question. Or, most scary of all, Nigel Farage – the foreigner-hating, bigoted fruitcake who is still under a cloud for fiddling his expenses while wasting the time and losing the goodwill of MEPs.

Perhaps we can just trust the Tories? Those pillars of society, so desperate to get into power that they broke election rules in marginal seats to influence undecided voters with battle buses, parachuted-in ministers and creepy, bullying activists. As for Labour – give me a break.

All in all, it's a rather public and embarrassing performance on all sides. Countries in and outside the EU look on, perplexed that we should so publicly exhibit the worst of our politics and nature and risk making the biggest mistake in generations.

For heaven's sake, use your COMMON SENSE and vote to REMAIN. Stop this madness and let's get back to sensible life again.



Tags: #EU #Referendum

Paul Gerrard My linkedin profile is here

First published 09/06/2016

I read a fairly, let's say, challenging article on the DevOps.com website here: http://devops.com/2016/06/07/devops-killed-qa/

It's a rather poor, but sadly typical, misrepresentation – or, let's be generous, misunderstanding – of the "state of the industry". The opening comment gives you the gist.

"If you work in software quality assurance (QA), it’s time to find a new job."

Apparently DevOps is the 'next generation of agile development ... and eliminates the need for QA as a separate entity'. OK, maybe DevOps doesn't mandate or even require independent test teams or testers so much, but that does not mean testing is not required. Whatever.

There then follows a rather 'common argument' – I'd say eccentric – view of DevOps at the centre of a Venn diagram. He then references someone else's view that suggests DevOps QA is meant to prevent defects rather than find them, but with all due respect(!) both are wrong. Ah, now we get to the meat. Nearly.

The next paragraph conflates Continuous Delivery (CD), Continuous Integration and the 'measurement of quality'. Whatever that is.

"You cannot have any human interaction if you want to run CD."

Really?

"The developers now own the responsibility rather than a separate entity within the organization"

Right. (Nods sagely)

"DevOps entails the use of vendors and tools such as BUGtrackJIRA and GitHub ..."

"To run a proper DevOps operation, you cannot have QA at all"

That's that then. But there's more!

"So, what will happen to all of the people who work in QA? One of the happiest jobs in the United States might not be happy for long as more and more organizations move to DevOps and they become more and more redundant."

Happy? Er, what? (Oh by the way, redundant is a boolean IMHO).

Then we have some interesting statistics from a website (http://www.onetonline.org/link/summary/15-1199.01). I can't say I know the site or the source of its data well. But it is entirely clear that the range of activities of ISTQB-qualified testers has a healthy future: in the nomenclature of the labels for each activity, the outlook is 'Bright' or 'Green'. I would have said, at least in a DevOps context, that their prospects were less than optimal, but according to the author's own source, prospects are blooming. Hey ho. Quote a source that contradicts your main thesis. Way to go!

But, hold on - there really is bad news ...

"However, the BLS numbers are likely too generous because the bureau does not yet recognize “DevOps” as a separate profession at all"

So stats from an obviously spurious source have no relevance at all. That's all right then.

And now we have the killer blow. Google job search statistics. Da dah dahhhhh!

"As evidence, just look at how the relative number of Google searches in Google Trends for “sqa jobs” is slowly declining while the number for “devops jobs” is rapidly increasing:"

And here we have it. The definitive statistics that prove DevOps is on the rise and QA jobs are collapsing!

[Graph: Google Trends comparison of 'qa jobs' vs 'devops jobs' searches]

"For QA, the trend definitely does not look good."

So. That's that. The end of QA. Of Testing. Of our voice of reason in a world of madness.

Or is it? Really?

I followed the link to the Google stats. I suggest you do the same. I entered 'Software Testing Jobs' as a search term to be added and compared on the graph and... voila! REDEMPTION!!!

Try it yourself – add a search term to the analysis. Here is the graph I obtained: [graph: 'software testing jobs' compared with 'devops jobs' and 'sqa jobs']
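
If you'd rather script the comparison than click around in the Trends UI, a minimal sketch along these lines should reproduce it. I'm assuming the third-party pytrends library here – the article and this post only used the web UI:

    # Minimal sketch using the unofficial pytrends library (pip install pytrends).
    # Assumption: interest_over_time() mirrors what the Trends web UI plots.
    from pytrends.request import TrendReq

    trends = TrendReq(hl='en-US', tz=0)
    terms = ['sqa jobs', 'devops jobs', 'software testing jobs']
    trends.build_payload(terms, timeframe='today 5-y')

    df = trends.interest_over_time()   # one relative-interest column per term
    print(df[terms].mean())            # average relative interest for each term

The numbers are relative to the most popular term, so it's the ratios that matter.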

Now, our American cousins tend to call testers and testing 'QA'. We can forgive them, I'm sure. But I know the term 'testers' is more than popular in IT circles over there. So think on this:

The ratio of searches for tester jobs to DevOps jobs is around five to one. That's testers to ALL JOBS IN DEVOPS – FIVE TO ONE.

ARE WE CLEAR?

So. A conclusion.

  1. Don't pay attention to blogs of people with agendas or who are clearly stupid.
  2. Think carefully about the apparently sensible but clearly nonsensical claims that people put on blogs.
  3. Be confident that testing, QA or whatever you call it is as important now as it was forty years ago and always will be.

It's just that the people who do testing might not be called testers forever.

Over and out.

VOTE REMAIN!




Tags: #testing #DevOps #QA #DevOpsKillingQA

Paul Gerrard My linkedin profile is here