Paul Gerrard My linkedin profile is here My Mastodon Account
My experiences in the Test Engineering business; opinions, definitions and occasional polemics. Many have been rewritten to restore their original content.
First published 01/06/2016
A recent study from the University of Oxford makes for interesting reading:
If programmers have a 50/50 chance of being replaced by robots, we should think seriously about how the same might happen to testers.
Machine Learning in testing is an intriguing prospect but not imminent. However, the next generation of testing tools will look a lot different from the ones we use today.
For the past thirty years or so we have placed emphasis on test automation and checking. In the New Model for Testing, I call this 'Applying'. We have paid much less attention to the other nine - yes, nine - test activities. As a consequence, we have simple robots to run tests, but nothing much to help us to create good tests for those robots to run.
In this paper, I am attempting to identify the capabilities of the tools we need in the future.
The tools we use in testing today are limited by the approaches and processes we employ. Traditional testing is document-centric and aims to reuse plans as records of tester activity. That approach and many of our tools are stuck in the past. Bureaucratic test management tools have been one automation pillar (or millstone). The other pillar – test automation tools – derives from an obsession with the mechanical, purely technical execution activity and is bounded by an assertion that many vendors still promote: that testing is just bashing keys or touchscreens, which tools can do just as well.
The pressure to modernise our approaches, to speed up testing and to reduce the cost of, and dependency on, less-skilled labour means we need some new ideas. I have suggested a refined approach using a Surveying metaphor. This metaphor enables us to think differently about how we use tools to support knowledge acquisition.
The Survey metaphor requires new collaborative tools that can capture information as it is gathered with little distraction or friction. But they can also prompt the user to ask questions, to document their thoughts, concerns, observations and ideas for tests. In this vision, automated tools get a new role – supportive of tester thinking, but not replacing it.
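To make the idea concrete, here is a minimal sketch in Python of the sort of low-friction capture loop such a tool might offer. This is entirely my own illustration, not a description of any real product: the note prefixes and file format are invented for the example.

```python
# A hypothetical, minimal note-capture loop for a Surveying/exploration
# session. Notes are classified by a one-character prefix:
#   ?  a question to pursue,  !  a concern or observation,  T  a test idea.
import json
import time

PREFIXES = {"?": "question", "!": "observation", "T": "test-idea"}

def capture_session(log_path="session_notes.json"):
    notes = []
    print("Surveying session started. Enter a blank line to finish.")
    while True:
        line = input("> ").strip()
        if not line:
            break
        kind = PREFIXES.get(line[0], "note")
        text = line[1:].strip() if line[0] in PREFIXES else line
        # Timestamp each note so the session can be reviewed or replayed later.
        notes.append({"time": time.time(), "kind": kind, "text": text})
    with open(log_path, "w") as f:
        json.dump(notes, f, indent=2)
    print(f"{len(notes)} notes saved to {log_path}")

if __name__ == "__main__":
    capture_session()
```

A real tool would do far more – prompt with questions, link notes to the system under test – but even this skeleton shows how capture can be kept almost frictionless.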
Your pair in the exploration and testing of systems might soon be a robot. Like a human partner, they will capture the knowledge you impart. Over time they will learn how to support and challenge you and help you to navigate through your exploration or Surveying activity. Eventually, your partner will suggest ideas that rival your own. But that is still some way off.
To download the full paper, go to the Tools Knowledge Base.
First published 09/05/2013
But there are also some significant forces at play in the IT industry, and I think the testing community will be coming under extreme pressure. I summarise this change as ‘redistributed testing’: users, analysts, developers and testers will redistribute responsibility for testing by, wait for it, collaborating more effectively. Testers probably won’t drive this transition, and they may be caught out if they ignore the winds of change.
In this article, I’ll suggest what we need from the leaders in our industry, the market and our organisations. Of course, some responsibility will fall on your shoulders. Whether you are a manager or technical specialist, there will be an opportunity for you to lead the change.
SaaS works as an enabler for very rapid deployment of new functionality and deployment onto a range of devices. A bright idea in marketing in the morning can be deployed as new functionality in the afternoon and an increasing number of companies are succeeding with ‘continuous delivery’. This is the promise of SaaS.
Most organisations will have to come to terms with the new architectures and a more streamlined approach to development. The push and pull of these forces will make you rethink how software available through the Internet is created, delivered and managed. The impacts on testing are significant. If you take an optimistic view, testing and the role of testers can perhaps, at last, mature to what they should be.
In many ways, in promoting the testing discipline as some of us have done for more than twenty years, we have been too successful. There is now a sizable testing industry. We have certification schemes, but the schemes that were a step forward fifteen years ago haven’t advanced. As a consequence, there are many thousands of professional testers, certified only to a foundation level, who have not developed their skills much beyond test script writing, execution and incident logging. Much of what these people do is basically ‘checking’, as Michael Bolton has called it.
Most checking could be automated and some could be avoided. In the meantime, we have seen (at last) developer testing begin to improve through their adoption of test-driven and behaviour-driven approaches. Of course, most of the testing they do is checking at a unit level. But this is similar to what many POFTs (plain old functional testers) spend much of their time doing manually. Given that most companies are looking to save money, it’s easy to see why many organisations see an opportunity to reduce the number of POFTs if they get their developers to incorporate automated checking into their work through TDD and BDD approaches.
As the developers have adopted the disciplines and (mostly free) tools of TDD and BDD, the testers have not advanced so far. I would say that test innovation tends to be focused on the testers’ struggle to keep pace with new technologies rather than insights and inventions that move the testers’ discipline forward. Most testing is still manual, and the automated tests created by test teams (usually with expensive, proprietary tools) might be better done by developers anyway.
In the test management space, one can argue that test management is a non-discipline, that is, there is no such thing as test management, there’s just management. If you take the management away from test management – what’s left? Mostly challenges in test logistics – or just logistics – and that’s just another management discipline isn’t it?
What about the fantastic advances in automation? Well, test execution robots are still, well, just robots. The advances in these have tracked the technologies used to build and deliver functionality – but pretty much that’s all. Today’s patterns of test automation are pretty much the same as those used twenty or more years ago. Free test automation frameworks are becoming more commonly used, especially for unit testing. Free BDD tools have emerged in the last few years, and these are still developer focused but expect them to mature in the next few years. Tools to perform end-to-end functional tests are still mostly proprietary, expensive and difficult to succeed with.
The test management tools that are out there are sophisticated, but they perform only the most basic record keeping. Most people still use Excel and survive without them: these products support the clerical test activities and logistics but do little to support the intellectual effort of testers.
The test certification schemes have gone global. As Dorothy Graham says on her blog the Foundation met its main objective of “removing the bottom layer of ignorance” about software testing. Fifteen years and 150,000+ certificate awards later, it does no more than that. For many people, it seems that this ‘bottom layer of knowledge’ is all they may ever need to get a job in the industry. The industry has been dumbed-down.
But, continuous delivery is a machine that consumes requirements. For the rapid output of continuous delivery to be acceptable, the quality of requirement going into that machine must be very high. We argue that requirements must be trusted, but not perfect.
The core of the redistribution idea is that the checking that occupies much of the time of testing teams (who usually get involved late in projects) can be better done by developers. Relieving the testers of this burden gives them time to get involved earlier and to improve the definition of software before it is built. Our proposal is that testers apply their critical skills to the creation of examples that illustrate the behaviour of software in use in the requirements phase. Examples (we use the term business stories) provide feedback to stakeholders and business analysts to validate business rules defined in requirements. The outcome of this is what we call trusted requirements.
In the Business Story Pocketbook, we define a trusted requirement as “… one that, at this moment in time, is believed to accurately represent the users’ need and is sufficiently detailed to be developed and tested.” Trusted requirements are specified collaboratively with stakeholders, business analysts, developers and testers involved.
Developers, on receipt of validated requirements and business stories, can use the stories to drive their TDD approach. Some (if not all) of these automated checks form the bulk of the regression tests that are implemented in a Continuous Integration regime. These checks can then be trusted to signal a broken build. As software evolves, requirements change; stories and automated checks change too. This approach, sometimes called Specification by Example, depends on accurate specifications (enforced by test automation) for the lifetime of the software product. Later in the process, (fewer) system testers can focus their reduced time on the more subtle types of problem: end-to-end and user experience testing.
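As a sketch of how a business story might become an automated check – my own illustration, with an invented story, names and values, not an example from the paper – consider a simple account withdrawal rule expressed as a pytest check:

```python
# Hypothetical business story: "An account holder cannot withdraw more than
# the available balance." Example values agreed with stakeholders: a balance
# of 100.00 refuses a withdrawal of 120.00 but allows one of 100.00.
import pytest

class InsufficientFunds(Exception):
    pass

class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise InsufficientFunds(amount)
        self.balance -= amount

def test_withdrawal_over_balance_is_refused():
    account = Account(balance=100.00)
    with pytest.raises(InsufficientFunds):
        account.withdraw(120.00)

def test_withdrawal_of_full_balance_succeeds():
    account = Account(balance=100.00)
    account.withdraw(100.00)
    assert account.balance == 0.00
```

Once checks like these run in Continuous Integration, the story and the code cannot silently drift apart – which is the essence of Specification by Example.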
The deal is this: testers get involved earlier to create scenarios that validate requirements, and that developers can automate. Improving the quality of requirements means the target is more stable, developers produce better code, protected by regression tests. Test teams, relieved of much of the checking and re-testing are smaller and can concentrate on the more subtle aspects of testing.
With regard to late testing in continuously delivering environments, testers are required to perform some form of ‘health check’ prior to deployment, but the days of teams spending weeks to do this are diminishing fast. We need fewer, much smarter testers working up-front and in the short time between deployment and release.
The pace of technical change is so high that the old way of testing just won’t cut it. Some teams are discovering they can deliver without testers at all. The challenge of testing is perceived (rightly or wrongly) to be one of speed and cost (even though it’s more subtle than that of course). Testers aren’t being asked to address this challenge because it seems more prone to a technical solution and POFTs are not technical.
But the opportunities are there: to get involved earlier in the requirements phase; to support developers in their testing and automation; to refocus testing away from manual checking towards exploratory testing; to report progress and achievement against business goals and risks, rather than test cases and bug reports.
Some of the clichés of testing need to be swept away. The old thinking is no longer relevant and may be career limiting. To change will take some courage, persistence and leadership.
Developers write code; testers test because developers can’t: this mentality has got to go. Testing can no longer be thought of as distinct from development. The vast majority of checking can be implemented and managed by development. One potential role of a tester is to create functional tests for developers to implement. The developers, being fluent in test automation, implement lower level functional and structural tests using the same test automation. Where developers need coaching in test design, then testers should be prepared to provide it.
Testers don’t own testing: testing is part of everyone’s job from stakeholder, to users, to business analysts, developers and operations staff. The role of a tester could be that of ‘Testmaster’. A testmaster provides assurance that testing is done well through test strategy, coaching, mentoring and where appropriate, audit and review.
Testing doesn’t just apply to existing software, at the end: testing is an information provision service. Test activity and design are driven by a project’s need to measure achievement and to explore capabilities, strengths and weaknesses so that decisions can be made. The discipline of testing applies to all artefacts of a project: business plans, goals, risks, requirements and design. We coined the term ‘Project Intelligence’ some years ago to identify the information testers provide.
Testing is about measuring achievement, rather than quality: testing has much more to say to stakeholders when its output describes achievement against some meaningful goal than alignment to a fallible, out-of-date, untrusted document. The Agile community have learnt that demonstrating value is much more powerful than reporting test pass/fails. They haven’t figured out how to do it yet, of course, but the pressure to align Agile projects with business goals and risks is very pronounced.
Henry Kissinger said, “A leader does not deserve the name unless he is willing occasionally to stand alone”. You might have to stand alone for a while to get your view across. Dwight D Eisenhower gave this definition: “Leadership is the art of getting someone else to do something you want done because he wants to do it”.
Getting that someone else to want to do it might yet be your biggest challenge.
Tags: #futureoftesting #Leadership
First published 10/06/2014
In my previous article, ‘The testers and coding debate: Can we move on now?’ [1], I suggested that:
We should move on from the ‘debate’ and start thinking more seriously about appropriate development approaches for testers who need and want more technical capabilities.
As I also said in the original article [1], I am not an expert in the teaching of programming languages or computer science in general. I have not studied the many writings of experts in this field who, needless to say, tend to be academics. But I am trusting an international team of academics who have [2]. They reviewed the research literature in this area (101 sources were cited). Their concluding remarks include this:
“We conclude that despite the large volume of literature in this area, there is little systematic evidence to support any particular approach. For that reason, we have not attempted to give a canonical answer to the question of how to teach introductory programming.”
So, with the belief that I am on reasonably safe ground, I will suggest that most commercially available programming courses are inappropriate for teaching programming to software testers and that a different approach is required. The programming skills taught to testers should be driven by the need to improve capabilities and must include fundamental code design principles and automated checking methods.
In the remainder of this article, I want to propose an approach to teaching non- or partly-technical testers enough programming skills to support broader testing-related capabilities.
The polarising question, ‘to code or not to code’ is unhelpful. It makes much more sense to ask about capabilities. Let us ask, ‘what software-related capabilities does a tester need to do their job?’ Well, of course, it depends on the job. Here is a non-definitive list of tester capabilities that some testers (and their teams) might find useful in different situations (prefix each with ‘ability to…’):
a) Read and understand the flow of control of code, fundamentals of algorithms
b) Construct, execute and interpret tests using a unit test framework (in a TDD, BDD context)
c) Construct, execute and interpret tests using a GUI test tool (in a system testing context)
d) Challenge and discuss with developers the coverage, value, thoroughness of unit tests
e) Write simple utilities to extract, filter, sort, merge or generate test data from structured databases and unstructured XML, HTML, CSV and plain-text sources (see the sketch after this list)
f) Use or adapt existing libraries to construct tests of networked devices, web servers, services, mail, FTP, terminal servers
… and so on.
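As a flavour of capability (e), here is a minimal sketch in Python. The file names and field names are invented for the illustration; the point is only that a few lines of a scripting language can extract and filter test data:

```python
# Hypothetical example of capability (e): build a small, targeted test data
# set by filtering and sorting customer records held in a CSV file.
import csv

def extract_test_data(source="customers.csv", output="test_data.csv"):
    with open(source, newline="") as f:
        rows = list(csv.DictReader(f))
    if not rows:
        return 0
    # Keep only active accounts, sorted by balance, highest first.
    selected = sorted(
        (r for r in rows if r["status"] == "active"),
        key=lambda r: float(r["balance"]),
        reverse=True,
    )
    with open(output, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(selected)
    return len(selected)

if __name__ == "__main__":
    print(f"{extract_test_data()} rows written")
```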
I should explain why I used the term capability, rather than competency. In some circles, competency implies having qualifications, being certified etc. So to avoid being drawn into that debate, I’ll use the word capability from now on. However, the word capability is no less of a challenge. The SEI’s Capability Maturity Model (Integrated) [3] is also a powerful and polarising force. CMMI is a certification industry in its own right. Even so, capability is the most appropriate term for current purposes.
The capabilities (or team/project requirements) above will be familiar to most testers. Usually, non-technical testers will consult their development colleagues, or bring in a test automation specialist or tool-smith to help out. Alternatively, they might have to acquire the skills to do the technical tasks themselves. What technical skills are required for each capability above?
Several attempts at programming (or software engineering) competency levels have been published. One commonly referred to is here [4], but it focuses on aspects of programming, systems architecture and software engineering skills. It looks more like a software engineering syllabus than a competency roadmap. For now, let’s park the idea of a competency or maturity scheme.
Now, the capabilities above are not in any particular order. They might be required in a variety of contexts and the skills profile for each capability differs. Although the fundamental knowledge required to write code can be taught, the way you become more proficient is to take on increasingly ambitious assignments that require deeper knowledge. It is your ambition and willingness to learn new things that determine how far you progress as a programmer. Let me illustrate this with reference to the libraries you might need to understand to build some functionality.
In one respect, it’s just the libraries you use that vary, but as you tackle more advanced problems the skills you need become more sophisticated. Here’s one way of looking at the levels of capability. Read as ‘ability to do/use…’:
It is reasonable to expect that a junior programmer should be capable of level 3, an experienced programmer level 4. A programmer needs the programming language skills, but also the ability and confidence to make their own choices. Testers need to understand how a beginner progresses to gain confidence and some independence (what I’ve characterised as level 3 above).
All this talk of capabilities is fine but the questions a tester needs answering are:
I am going to suggest that, firstly, the motivation to learn is the need to acquire some tool or utility to improve your capability and help you in your job. It is all very well learning to write code by rote, coding dummy programs that are of little value outside the class. But if, at the end of the course, you are left with some working code that has value, you have the beginnings of a toolkit that enhances your capability and that you can evolve, improve and share.
The best way to learn a particular aspect of programming is to have working examples that you write yourself or samples that you are provided with, that you can adapt and improve to suit your own purposes. Of course, in a training course, some example code that performs a useful function has to be limited in scope to allow it to be usable for teaching. It might also have to be a suboptimal solution to the problem at hand. For example, it might not use the most sophisticated libraries or the most efficient algorithm etc. Inevitably, the offered solution to a set problem can only ever be one of many possible. Other real-world factors outside the classroom might determine what the ultimate solution should look like.
Of course, you also need a safe and assistive programming and testing environment and a good teacher.
Almost all commercial programming training courses aim to teach the basics of a selected programming language. They focus almost entirely on the syntax of the language so what you learn is how to use the fundamental language constructs. Now, this is fine if you are already a programmer or you want to recognize the elements of code when you see them. But they don’t prepare you very well if you are a beginner and want to perform a real-world task:
Programming classes for testers need to address all these aspects.
What I’m proposing here is a set of design principles for programming training for testers and I’ve suggested one possible course structure that has a specific set of capabilities in mind in the Appendix.
A set of course design principles explain the thinking behind the Road Map and the content of specific courses. The word ‘client’ refers to the tester or the employer of the testers to be trained.
Task-Oriented
The course will specifically aim to teach people through example and practice to write code to perform specific tasks. The aim is not to learn every aspect of the selected language.
Pragmatic, Not Perfect
The course exercises, examples and take-away solutions will not be perfect. They may be less efficient, may duplicate code available in pre-existing libraries and might not use objects – but they will work.
All Courses Begin with a Foundation Module
The first, foundation module will cover the most fundamental aspects of program design and process and the most basic language elements. All subsequent modules depend on the Foundation module.
Content Driven by Demand for Capabilities
The course topics taught (the philosophy, the programming language elements) will be sufficient to support the exercises that deliver each required capability.
A Language Learning Process is Part of the Course
Because the course will not cover all language elements or available libraries, there will be gaps in knowledge. The course must allow you to practise finding the right language construct, useful libraries and suitable code samples from other (usually online) sources so you are not lost when you encounter a new requirement.
A Scripting Language Should be the Default
Where the client has no preference for a particular language as the subject for the course, a scripting language (Python, Ruby, Perl etc.) would be selected. The initial goal must be to learn the fundamentals of writing software in an easy-to-use language. Popular languages like C++, C# and Java have a steeper learning curve. However, if your developers write all software in Java, for example, it is reasonable for the testers to learn Java too (although Java might not be the easiest ‘first language’ to learn).
Integrated Development Environments (IDEs)
IDEs are valuable resources for professional programmers. However, the initial course modules would use simple, language-sensitive editors (Notepad++, Gedit, Emacs etc.). Use of a specific IDE would be the subject of a dedicated course module.
Open Source Wherever Possible
To avoid the complications of proprietary licensing and to facilitate skills and software portability, open source language interpreters/compilers, test frameworks and language editors should be used.
Minimal Coupling Between Course Modules
The focus of each module is to build a useful utility and for each module to be independent of the others (which might not be required). Of course, for modules to be independent, some topics will be duplicated across modules. Courses must allow the trainer to skip (or review) topics already covered, to avoid teaching the same material twice in different modules.
And with that long list of principles, let’s sketch out how the structure of a course might look.
The structure will comprise a set of modules: an initial Foundation module plus other modules that each focus on building a useful utility. Appendix A presents a possible course consisting of four modules that will give students enough knowledge to write and use:
This four-module course is just an example. Although the Foundation Module is quite generic and should give all students a basis to write simple programs and make progress, it could be enhanced to include more advanced topics such as Regular Expressions or XML Processing or Web site messaging, for example.
The overall theme of the example course is the testing of web sites and web services and manipulation of the data retrieved. Obviously, courses with a different focus are likely to be required. For example:
Of course, the precise content of such courses would need to be discussed with the client to ensure the course provider can meet their requirements.
In this paper, I have tried to justify the situations where it would be appropriate for testers to learn the basics of a programming language and suggest that programming knowledge, even if superficial, could benefit the tester.
Existing, commercial programming courses tend to be focused on teaching the entirety of a language and tend to be aimed at experienced programmers wishing to learn a new language. Testers need a differently focused course that will teach them a valuable subset of the technology that provides specific capabilities relevant to their job. Programming courses for testers need to be designed to support this somewhat different requirement.
A set of guiding principles for creating such courses is presented and a course description, for testing web sites and services through APIs, offered as an example. Other, differently focused courses would obviously be required.
This is a work in progress and comments, criticisms and suggestions for improvement are welcomed. We are interested in working with other organisations with specialisms in different technologies or business domains to refine this approach. We are also seeking to work with potential clients to develop useful course materials.
1. Paul Gerrard, ‘The testers and coding debate: Can we move on now?’
2. Arnold Pears et al., ‘A Survey of Literature on the Teaching of Introductory Programming’.
3. CMMI Institute, http://cmmiinstitute.com/
4. Sijin Joseph, ‘Programmer Competency Matrix’.
First published 21/07/2015
This year, I'm running a half-day tutorial on Tuesday 3 November at EuroSTAR in Maastricht. Visit the conference page here.
I've created a short elevator pitch to promote the session.
Tags: #Eurostar #NewModelTesting
First published 09/06/2016
I read a fairly, let's say, challenging article on the DevOps.com website here: http://devops.com/2016/06/07/devops-killed-qa/
It's a rather poor, but sadly typical, misrepresentation – or, let's be generous, misunderstanding – of the "state of the industry". The opening comment gives you the gist.
"If you work in software quality assurance (QA), it’s time to find a new job."
Apparently DevOps is the 'next generation of agile development ... and eliminates the need for QA as a separate entity'. OK maybe DevOps doesn't mandate or even require independent test teams or testers so much. But it does not say testing is not required. Whatever.
There then follows a rather 'common argument' – I'd say eccentric – view of DevOps at the centre of a Venn diagram. He then references someone else's view that suggests DevOps QA is meant to prevent defects rather than find them, but with all due respect(!) both are wrong. Ah, now we get to the meat. Nearly.
The next paragraph conflates Continuous Delivery (CD), Continuous Integration and the 'measurement of quality'. Whatever that is.
"You cannot have any human interaction if you want to run CD."
Really?
"The developers now own the responsibility rather than a separate entity within the organization"
Right. (Nods sagely)
"DevOps entails the use of vendors and tools such as BUGtrack, JIRA and GitHub ..."
"To run a proper DevOps operation, you cannot have QA at all"
That's that then. But there's more!
"So, what will happen to all of the people who work in QA? One of the happiest jobs in the United States might not be happy for long as more and more organizations move to DevOps and they become more and more redundant."
Happy? Er, what? (Oh by the way, redundant is a boolean IMHO).
Then we have some interesting statistics from a website, http://www.onetonline.org/link/summary/15-1199.01. I can't say I know the site or the source of its data well. But it is entirely clear that the range of activities performed by ISTQB-qualified testers has a healthy future. In the nomenclature of the labels for each activity, the outlook is 'Bright' or 'Green'. I would have said, at least in a DevOps context, that their prospects were less than optimal, but according to the author's source, prospects are blooming. Hey ho. Quote a source that contradicts one's main thesis. Way to go!
But, hold on - there really is bad news ...
"However, the BLS numbers are likely too generous because the bureau does not yet recognize “DevOps” as a separate profession at all"
So stats from an obviously spurious source have no relevance at all. That's all right then.
And now we have the killer blow. Google job search statistics. Da dah dahhhhh!
"As evidence, just look at how the relative number of Google searches in Google Trends for “sqa jobs” is slowly declining while the number for “devops jobs” is rapidly increasing:"
And here we have it. The definitive statistics that prove DevOps is on the rise and QA jobs are collapsing!
"For QA, the trend definitely does not look good."
So. That's that. The end of QA. Of Testing. Of our voice of reason in a world of madness.
Or is it? Really?
I followed the link to the Google stats. I entered 'Software Testing Jobs' as a search term to be added and compared on the graph and... voila! REDEMPTION!!!
Try it yourself – add a search term to the analysis. Here is the graph I obtained:
Now, our American cousins tend to call testers and testing - QA. We can forgive them, I'm sure. But I know the term testers is more than popular in IT circles over there. So think on this:
The ratio of tester jobs to DevOps jobs is around five to one. That's testers to ALL JOBS IN DEVOPS – FIVE TO ONE.
ARE WE CLEAR?
So. A conclusion.
Testing isn't going away. It's just that the people who do testing might not be called testers forever.
Over and out.
First published 06/03/2015
This blog first appeared on the EuroSTAR blog in April 2014 (http://conference.eurostarsoftwaretesting.com/2014/courage-and-ambition-in-teaching-and-learning/)
In 2013, Cem Kaner (http://kaner.com) asked me to review a draft of the ‘Domain Testing Workbook’ (DTW) written by Cem in partnership with Sowmya Padmanabhan and Douglas Hoffman. I was happy to oblige and pleased to see the book published in October 2013. At 480 pages, it’s a substantial work and I strongly recommend it. I want to share some ideas we exchanged at the time and these relate to the ‘Transfer Problem’.
In academic circles, the Transfer of Learning relates to how a student applies the knowledge they gain in class to different situations or the real-world. In the preface of DTW, the transfer problem is discussed. Sowmya and Cem relate some observations of student performance in a final exam which contained:
The ‘Transfer of Learning’ challenge is an interesting and familiar topic to me.
Like many people, in my final school year, I sat A-Level examinations. In my chosen subjects – Mathematics, Physics and Chemistry – the questions in exams tended to focus on ‘point topics’ lifted directly from the syllabus. The questions were similar to the practice questions on previous exams. But I also sat Scholarship or S-Level exams in maths and physics. In these exams, the questions were somewhat harder because they tended to merge two or even more syllabus concepts into one problem. These were clearly harder to answer and required more imagination and, I’m tempted to say, courage, in some respects. I recall a simple example (it sticks in my mind – exam questions have a tendency to do that, don’t they?)
Now, the student would be familiar with the modulus of a number, |x|, being its absolute or positive value (x can be a positive or negative number), and with the familiar quadratic equation ax² + bx + c = 0, which can often be solved by trial and error but can always be solved using the quadratic formula. A quadratic with a modulus in it would not be familiar, however, so this problem demands a little more care. I leave it to you to solve.
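I can't reproduce the original question here, but a representative problem in that style – my own invention, not the actual exam question – merges the two concepts like this:

```latex
% Illustrative S-level style problem (invented): solve x^2 - 5|x| + 6 = 0.
% Case x >= 0: |x| = x,  so x^2 - 5x + 6 = (x-2)(x-3) = 0  =>  x = 2 or 3.
% Case x <  0: |x| = -x, so x^2 + 5x + 6 = (x+2)(x+3) = 0  =>  x = -2 or -3.
\[
  x^2 - 5\lvert x \rvert + 6 = 0
  \quad\Longrightarrow\quad
  x \in \{-3,\, -2,\, 2,\, 3\}
\]
```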
Now, this ‘harder question style’ was (and still is) used by some universities in their own entrance exam papers. I sat university entrance exams which were exclusively of this pattern. Whether it is an effective discriminator of talent – I don’t know – but I got through them, thank goodness.
My experience of testers not being able to transfer skills to real-world or more complex contexts is a manifestation of the ‘transfer problem’. It seems to me that it’s not lack of intellect that causes people to struggle with problems out of a familiar context. I’d like to consider two attitudes to teaching and learning that we should encourage: courage and ambition. For the first, I will draw a parallel with sports coaching.
Most years, I coach rowing at my local club. In rowing, and in particular sculling, if a sculler makes a mistake they can capsize the boat, fall into the river and get rather wet. There’s a risk, and the risk makes people unwilling to commit to the correct technique. Correct technique demands that the rower first gets their blades off the water, which leaves them in a very vulnerable, unstable position. They have to learn how to balance a boat before they can move it quickly: they must be confident first and skilled second, and only then can they apply their power to make the boat move quickly.
It’s a bit like taking the stabilisers (training wheels) off a pushbike – it takes some confidence and skill for a beginner rider to do that. Coaching and learning tightrope walking, skiing, climbing and gymnastics are all similar.
Coaching athletes involves asking them to have courage, to trust their equipment, the correct technique and the laws of physics, and not to fear the water or a fall in the snow. In fact, coaches almost force people to fail so they recognise that failure doesn’t hurt so much and that they can commit, knowing the consequence of failure is not so bad after all.
I remember many years ago when I was learning to ski in a class of ten people – at one point on a new slope, the whole class was having difficulty. So the ski instructor took us to an ‘easier slope’. We struggled there too, but made some progress. Then we went back to the first slope. Remarkably, everyone could ski down the first slope with ease. In fact, the ski instructor had lied to us – he took us to a harder slope to ‘go back to basics’. It turned out that it was confidence that we lacked, not the skill.
Getting people to recognise that the risk isn’t so bad, to place trust in things they know, to have courage to try and keep trying can’t be learnt from a book or online course. It takes practice in the real world, perhaps in some form of apprenticeship and with coaching, not just teaching, support. Coaches must strongly challenge the people they coach, continuously.
The best that a book can do is present the student (and teacher) with some harder problems like this, with worked examples. If we expect the student to fail, we should still set them this kind of problem, but then the teacher/coach has to walk through the solution, pointing out carefully that it’s not just allowed, but really is essential, to ‘think outside the box/core syllabus’. Perhaps even to trust their hunches.
Coaches/trainers and testers both need courage.
The test design techniques are often taught as rote procedures whereby one learns to identify a coverage item (a boundary, a state transition, a decision in code) and then derive test cases to cover those items until 100% coverage is achieved. There is nothing wrong with knowing these techniques, but they always seem to be taught out of context. Practice problems are based on static, unambiguous but, above all, simple requirements or code, so when the student sees a real, complicated, ambiguous, unstable requirement it’s no wonder they find it hard to apply the techniques effectively – or at all.
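To illustrate just how mechanical the rote procedure is, here is an invented, deliberately simple example in Python: a rule saying an order quantity must lie between 1 and 100, with the classic boundary-value derivation of test cases around each boundary.

```python
# Textbook boundary value analysis on an invented rule: an order quantity
# is valid when 1 <= quantity <= 100.
import pytest

def quantity_is_valid(quantity):
    return 1 <= quantity <= 100

# The rote derivation: each boundary plus its nearest neighbours.
@pytest.mark.parametrize("quantity, expected", [
    (0, False),    # just below the lower boundary
    (1, True),     # on the lower boundary
    (2, True),     # just above the lower boundary
    (99, True),    # just below the upper boundary
    (100, True),   # on the upper boundary
    (101, False),  # just above the upper boundary
])
def test_quantity_boundaries(quantity, expected):
    assert quantity_is_valid(quantity) == expected
```

Nothing here requires judgement – which is exactly the point: real requirements are rarely this clean, and choosing the model is the hard part.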
These stock techniques are often presented as a way of preparing documentation to be used as test scripts. They aren’t taught as test models having more or less effectiveness or value for money to be selected and managed. They are taught as clerical procedures. The problem with real requirements is you need half a dozen different models on each page, on each paragraph, even.
A key aspect of exploratory testing is that you should not be constrained but should be allowed and encouraged to choose models that align with the task in hand so that they are more direct, appropriate and relevant. But the ‘freedom of model choice’ applies to all testing, not just exploratory, because at one level, all testing is exploratory. I’ve said that before as well (http://gerrardconsulting.com/index.php?q=node/588).
In future, testers need to be granted the freedom of choice of test models but for this to work, testers must hone their modelling skills. We should be teaching what test models are and how models can be derived, compared, discarded, selected and used. This is a much more ambitious goal than teaching and learning the rote-procedures that we call, rather pompously, test design techniques. I am creating a full-day workshop to explore how we use models and modelling in testing. If you are interested or have suggestions for how it should work, I’d be very interested to hear from you.
We need to be more ambitious in what we teach and learn as testers.
Tags: #teaching #learning
First published 03/05/2013
The nice folk at Diaz Hilterscheid have made available the video of my track talk, “Continuous Delivery of Long Term Requirements”, at Agile Testing Days 2012.
The original write up ran: "Larger, multi-stakeholder projects take time to establish consensus. Usually, no individual is able (or available) to be the on-site customer, and even if one is nominated, they are accountable to multiple stakeholders and the consultation and agreement feedback loop can take weeks or months. Now, it could be said that an organisation that takes months to make up its mind can never be Agile and they are doomed always to use structured, staged, bureaucratic approaches.
How could a company used to (and committed to) managing stakeholder expectations in a systematic way over longer timescales take advantage of Agile approaches to development and delivery? Put it another way. If a corporate has a trusted set of business rules defined in requirements, can delivery of a system to meet those requirements be achieved in an Agile, continuous way? How can larger requirements, evolved over weeks or months be channelled into teams doing continuous delivery?
It’s all about trust. The requirements that feed the beast of continuous delivery must be trusted to be mature, complete and coherent enough to deliver the business value envisaged by stakeholders. In one way or another, the requirements must be tested and trusted enough to pour into larger or collaborating Agile teams. Note: *trusted*, not perfect. This talk explores how larger-scale requirements management and testing could drive a Continuous Delivery process. It’s not ‘pure Agile’. Rather, it’s lean and close to being a factory-based approach, but it might be the way to achieve Agile delivery in a non-Agile business."
First published 09/05/2014
I’ve been meaning to write an article on writing conference abstracts and submissions for a while. I’m prompted to do this now because I’ve had to provide feedback to some of the many EuroSTAR submitters who asked for feedback. I can’t provide an individual response to the nearly 400 people who were unsuccessful. But I can provide some generalised feedback on my experience as Programme Chair for EuroSTAR, Host of the Test Management Forum, Co-Programme Chair for Testing in Finance, several Unicom and many BCS SIGiST events and others that I can’t recall. I've also given a few talks in my time.
In particular, I want to address this to the people who are trying to get their talks accepted and who might not yet have succeeded. Personally, I like to go to conference talks by a small number of speakers I know well and respect. But I also like to go to talks from ‘first-timers’. They often have much more energy and new insights that make conference talks so very interesting.
(By the way, if you want to pitch a session at the TMF, do let me know. I’m always on the lookout. Let me know your idea first.)
So, with no further ceremony, here are my Do’s and Don’ts of conference submissions. Mostly, they are Don’ts. I hope you find them useful.
If you ignore the advice that has been put together by the programme team, then you are simply asking for trouble. You will make it easy for the reviewer – they will discount your submission very quickly. These are the mistakes that I would say are really basic. Perhaps they are dumb too. See later.
Whether you are writing a CV, presenting yourself for an interview, or pitching an abstract for a conference talk, if you make a bad impression in your title or first paragraph, the reviewer will do you the courtesy of reading your abstract, for sure. But in their mind, they will be looking for further evidence to confirm their first impression and score the submission low.
If your title and opening words trigger a reaction (curiosity, excitement, horror even) they will read to the end with interest but the reviewer will be looking for evidence to score your abstract highly.
Aim to get a good reaction in the first few sentences of your submission.
Do not promise 250x faster regression testing, a tool/process/service that will transform the industry or the lives of the audience. If you have actually made $1 million out of your idea, then perhaps people would like to hear your story. Otherwise, choose a different tack.
If you are offering a story of how you transformed something in your own company, reviewers will check your profile that a) you worked for the company b) for a reasonable length of time and c) your experience fits the story you tell. Match your story to your experience. Do not under any circumstances lie. Unless misrepresentation and being fooled is a key aspect of your talk.
If you work for a tool vendor, I’m afraid it is really hard to get your story of how wonderful your tool is into a conference. It’s much better to talk about tool classification, or implementation of tools or true experiences of using a type of tool. Unfortunately, there is also some prejudice against speakers from tool vendors, particularly the larger ones. Better that you get a client to talk about how wonderful you are perhaps. Or you talk about something else entirely.
All conferences want sessions that people will think are topical, important, the future. Topicality sells conference tickets.
At some point, object-orientation, client/server, Year 2000, the Internet, web services, Agile, Mobile were on the horizon. If you can pitch a session on something that people have heard of, that sounds important, but about which they know very little, you have an excellent chance of being chosen. The topics above have faded somewhat. Agile and Mobile may have peaked. So what’s next? Continuous Delivery? DevOps? Internet of Everything? Shift-Left? MicroServices? Try and spot the wave and get on it before it becomes mainstream – if you can.
Do not write a title that includes the words of the theme, unless the theme is just one or two words. But if the theme allows it, embrace it. This year’s EuroSTAR theme is ‘Diversity, Innovation, Leadership’. A title like ‘Innovative Leadership in a Diverse Project’ might get some attention, but you had better back it up with some facts and a good story. Otherwise – it’s a sure-fire loser.
When I read an abstract, I imagine removing all the words that appear in the theme. If the abstract still makes sense, then the theme words were added as an afterthought. As tennis umpires might call – Out!
Challenge the theme but don’t undermine it. For example, Eurostar 2002 had a theme ‘The Value of Testing’. I chose as my title, ‘What is the Value of Testing and how can we increase it?’ It seemed to me that most people would stick ‘value’ in their title and give the same old talks as before. I wanted to challenge it and explore what value really meant in this context. I got a great talk out of it. In that case, the title came first, the talk came later.
Use the theme as a starting point, not as some words you can insert into an existing abstract. It can be spotted a mile away.
The reviewers will be reading and scoring tens, possibly hundreds, of abstracts. They are looking to get through your abstract quickly and to score it with confidence so they can say, “this is fantastic” or “this is out”. Obviously you want the first reaction. If you write too much text, fill it with jargon, or tell a vague story with a non-specific punchline, expect reviewers to give you a low score.
Consider using a journalistic approach to bring the facts out quickly, or use ‘Kipling’s Honest Serving Men’ – look it up if you don’t know it.
Make it hard for the reviewer to reject you. The reviewers will have in the back of their minds that 80% or 90% of the submissions will have to be rejected, so their reviews are a filtering process. Don’t make it easy for them to discard your proposal.
This seems obvious (like much of my advice here), but again and again, we see people proposing titles like ‘Seven habits of really effective testers’, ‘Ten Laws of Software Testing’, ‘Top Ten mistakes made...’ and so on. Now, there’s nothing wrong with these titles, and I’m sure I have seen several excellent presentations using each of these titles. But that’s the problem – it’s been done before.
One more mistake that is often made is this. You have a good title. Your opening paragraph sets the scene. You tell your story in a concise, engaging way. Your three key points are well made. And the reviewer gives you a low score. Why is that?
Perhaps the story really could be told in 3 minutes. The reviewer got all they needed from the abstract – so why go to the talk? Don’t forget the abstract is NOT the talk. The abstract is a sales document, like a CV. It must make people think they should invest 45 minutes of their time in listening to your story. I’m not saying that ‘you should not give the game away’. You need to leave your reviewer (and conference attendee) with the feeling that they want to go and hear you tell your story in person.
Read your own abstract and ask yourself, “would I want to go to this talk?” Ask a colleague or your boss whether they would want to hear the story.
Finally, some loose-end stupid mistakes. Don’t get caught out this way.
Keynotes are usually chosen by the chair or program committee. But there is no magic pass for experienced speakers to get into good conference programs. Pretty much, the experienced speakers are in the same situation as every other submitter. Everyone has to write a good abstract. The old-hands do have one advantage – they have experience of writing good abstracts. They know what works and what doesn’t. They are a known quantity so they might appear to be a ‘low risk’ (although being known can also count against you if you are known to bore or offend people).
More importantly, all conferences want to select some speakers who are new faces, fresh blood, and who will bring new ideas to an audience. So as an inexperienced or first-time conference speaker you have an advantage too. Use the opportunity to submit, to get yourself on stage in front of your peers and tell your story. There’s no better feeling for a speaker than people telling you they enjoyed your talk. So go for it – and put the effort into writing a good proposal. After that, writing the talk is easy ;O)
I wish you the best of luck.
First published 22/06/2016
I want to use this post to get a few things off my chest. Most people who have an interest in remaining in or leaving the EU recycle other people's posts that align with their point of view. I am no different and, where I thought something merited a repost, I've done it. Mostly on Twitter, with a little on Facebook.
Anyway - as voting day looms, here are my closing thoughts on the whole affair.
Firstly, the pretext for the referendum had little to do with Europe. David Cameron offered to hold one as part of a deal to postpone Tory splits and arguments over Europe until after last year's election. The referendum was a bargaining chip in a rather grubby deal made inside the Tory party. With hindsight, he regrets this, as the continuing implosion of all opposition meant he would have been elected anyway.
For weeks in the build-up to the formal campaign, it seemed like the facts would out, that people would make the decision on the basis of information. Being a rational sort, all looked good to me. People would make a well-informed decision. The clear and present shortcomings of the EU are easily articulated. The benefits are harder to pin down monetarily, but are substantial.
The performance of the Remain campaigners has been rather limp and incompetent. The Tory leadership have argued for remaining, but you can see their heart is not in it. On other days, those same people would be harsh critics of the organisation they profess to support. The performance of Labour has been pathetic, but worse, it has been late in coming. The biggest dent in the voting numbers may be due to Labour failing to take a position in time. Many Labour voters have not registered or, because of poor leadership, may vote "against the Tories" – but the wrong way.
The behaviour of the Brexiters has been disgraceful, disrespectful, undemocratic and, frankly, un-British. With their distortion of fact, their fabrication of arguments and their rabid anti-foreigner rhetoric, the Brexiters have adopted campaign slogans, arguments and views that used to be confined to extremist right-wing groups or oddballs like UKIP. No more, it seems. Views that, when expressed publicly, used to mean you were kicked out of office or out of a mainstream party altogether have become mainstream. This is a disgrace and shames our political system.
It's a pretty sorry state of affairs.
So bear with me, and let me summarise the main issues from my own personal perspective. If you don't know what the issues are, look here for an example list: http://www.bbc.co.uk/news/uk-politics-eu-referendum-36027205
I'll highlight some of the facts and misrepresentations. These figures come from the BBC. Some people might argue the BBC is biased. I think the BBC, with all its own faults (some of which mirror the EU's), is, when all else is considered, a rock.
The costs and economics of leaving. The gross contribution in 2015 was £17.8bn, but the UK rebate was worth £4.9bn. £4.4bn was also paid back to the UK government for farm subsidies and other programmes. The net or actual payment is £8.5bn per year, which amounts to about £163m per week. Certainly not the £350m a week quoted by the Leave campaign. You can see on the BBC page that Brexit campaigners argue this net payment can be used to fund other things (though the Tories have shown no enthusiasm for this in the past). They have decided to lie, and everyone knows the bigger the lie, the more likely it is to be believed.
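The arithmetic, for anyone who wants to check it, is simply:

```latex
% Net contribution from the 2015 figures quoted above (in GBP billions):
\[
  17.8 - 4.9 - 4.4 = 8.5 \ \text{per year}
  \qquad\Longrightarrow\qquad
  \frac{\pounds 8.5\,\text{bn}}{52\ \text{weeks}} \approx \pounds 163\,\text{m per week}
\]
```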
The net loss to the economy of disturbing, let alone leaving, the single market outweighs our payments many times over. In the period 30 May to 13 June, the stock market lost more than 300 billion pounds in value. The improving chances of Brexit caused a major exit from UK shares and Sterling. Shares have recovered as the polls show Remain to be recovering ground again. But given that a large proportion of all of our pensions are invested in the UK stock market, everyone with a pension in the UK suddenly became poorer - we're talking thousands of pounds poorer - at a stroke. Consider the numbers. Investors were 300 billion pounds poorer in just two weeks. Compare that with the COST of membership of 8.5 billion pounds.
We'd be INSANE to leave the EU. Of all the countries inside and outside the EU, only Russia and Donald Trump think it would be a good idea for us to vandalise our economy in this way.
The economic argument is one perspective. The statistics used to make a case for leaving across all of the issues are, when the rhetoric is peeled back, of only marginal importance.
Migration is supposedly the Brexiters' trump card. If we leave the EU and refuse to trade with EU countries on the current EU terms, much of our trade with Europe will be suspended. Typically, the largest companies exporting aerospace technologies, other high-end manufacturing and services at scale could be forced to adopt emergency plans. Hundreds or thousands of companies might be affected and go out of business, or choose to base their business elsewhere. Many companies have been making emergency Brexit plans as a precaution. No future government could defend that situation. So we would have to do business with the EU on their terms. Terms which include free movement of people.
So leaving the EU would chill our economy – it would be affected adversely, though no one knows by how much – but would make no difference to the numbers of EU migrants. To think differently is fantasy.
The so-called loss of sovereignty doesn't stand up to much scrutiny. Whenever we join any organisation, we hope to benefit by being members, and we lose a little sovereignty as a consequence. What have we lost so far? Only a few percent of UK laws derive from the EU. Our most important ones (e.g. common law, tax, criminal, defence etc.) are unaffected. EU laws are almost entirely focused on protecting workers' rights. These rights benefit all citizens of EU states. The only people who would benefit from losing these rights are owners of companies who wish to run them along what, to most people, would appear to be Dickensian lines. Cruel, crooked employers who currently outsource work to other countries anyway. (Check out the backgrounds of the Brexit-supporting employers.)
These laws protect your rights. Why would you want to lose them? The EU, with our involvement and agreement define these laws. The UK judiciary applies them. The European Court of Justice exists to resolve inter-member disputes, or gross breaches. Doesn't that seem reasonable?
The same case for remaining in the EU can be made across all of the issues. Common sense says, it would be a foolish thing to do.
Enough – the facts don't provide much support for changing our status in the EU. Michael Gove's astonishing suggestion that we can't trust 'experts', or anyone with an opinion whose views do not align with his, is a corruption. The people our government employs to conduct research and act as guardians of our laws and economy are no longer to be trusted. The Bank of England is not to be trusted. The Institute for Fiscal Studies, not to be trusted. The IMF, not to be trusted. Our trading partners, inside and outside the EU – not to be trusted.
Who the hell can we trust then?
Apparently, we can trust Newt-Faced Loser Gove – most hated, incompetent and eventually fired Education Secretary. Or swivel-eyed, over-ambitious Boris, known to be economical with the truth, always good for a quote or a laugh, rarely answering a direct question. Or, most scary of all, Nigel Farage: the foreigner-hating, bigoted fruitcake who is still under a cloud for fiddling his expenses whilst wasting the time and losing the goodwill of his fellow MEPs.
Perhaps we can just trust the Tories? Those pillars of society, so desperate to get into power they break the election rules in marginal seats to influence undecided voters with battle buses, parachuted-in ministers and creepy, bullying activists. As for Labour – give me a break.
All in all, it's a rather public and embarrassing performance on all sides. Countries in and outside the EU look on, perplexed that we should so publicly exhibit the worst of our politics and nature and risk making the biggest mistake in generations.
For heaven's sake, use your COMMON SENSE and vote to REMAIN, stop this madness and let's get down to sensible life again.