Test Engineering Blogs

from Paul Gerrard

https://youtu.be/a61wKNUbDhY?si=cdv1HJhGk7gGuNub

Video Abstract

Does your working software sometimes stop working when changes are introduced?

Are your developers unable to analyse the impact of changes, so that unwanted side effects get released?

What measures are you taking to reduce the number of software regression errors?

Now, even though it's expensive and doesn't prevent regressions, most companies use system-level regression testing.

You could say this is a last resort and, in some ways, the least effective anti-regression measure we can take.

Let's look more closely at what regressions are, why they occur, and which anti-regression measures are available to us.

Overview

I want to talk about software regressions and why regressions occur. If software regressions are the enemy, we want to prevent them as well as find them. Now, there are several options and we should consider all of them depending on our circumstances.

We need to know how regressions occur and why and take measures to prevent them as much as possible. So let's explore what a regression is.

What is a software regression?

One definition would be

“an unintended side effect or bug created or activated when changes are made to … something”

... and we’ll look at what that something is.

There are several causes of software regressions and there are some variations of these too.

Causes of Regressions

Obviously, code changes are a big concern. The most common cause of regressions is when developers modify existing code. Code changes often unintentionally affect the behavior of other parts of the system.

But there are also environment changes that can cause problems.

Environment Changes

For example, hardware, operating system and other software upgrades can cause previously stable software to fail.

Updates or changes to third-party libraries, APIs or services that your software relies on can also introduce regressions.

These third parties could be partnering organizations or divisions in your own company.

Lack of Technical Understanding

When teams do not share adequate knowledge or understanding of the system's overall architecture, or if there is a lack of communication between different development teams, regressions are more likely to occur.

Older code bases usually lack clear documentation. All the experts may be long gone.

The knowledge and understanding of the original design choices are poor, so architects, designers and developers make mistakes.

Code maintenance becomes very risky because no one has time to really analyze code to understand it.

And without that understanding, it becomes difficult to predict the impacts of change.

Developer Testing

Development testing, if it is not thorough, can also result in missed regressions.

Developers are adopting better testing practices, including test first and TDD, but it's a slow process.

The big question is: how can we avoid software regressions? There are several well-established approaches.

More Effective Impact Analysis

The obvious one is to perform more effective impact analyses. But impact analysis is difficult and it's never going to be 100% reliable.

Requirements Impact Analysis

At the requirements level, we need to track requirements changes to understand the impact on other requirements.

Code Level Impact Analysis

At the code level, we have to trace code changes to understand the impact on other code.
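
As a crude illustration of what code-level impact analysis can look like, here is a minimal Python sketch that asks git which files have changed since a release tag and maps them to candidate tests by naming convention. The last-release tag, the repository layout and the tests/test_<module>.py convention are illustrative assumptions; real impact analysis traces dependencies far more deeply.

```python
# A naive code-level impact analysis sketch: list the files changed since a
# release tag and map them to candidate test files by naming convention.
# The tag name and the tests/test_<module>.py layout are assumptions.
import subprocess
from pathlib import Path

def changed_files(since: str = "last-release") -> list[str]:
    """Return Python source files changed since the given git ref."""
    result = subprocess.run(
        ["git", "diff", "--name-only", since, "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return [f for f in result.stdout.splitlines() if f.endswith(".py")]

def candidate_tests(changed: list[str]) -> set[str]:
    """Guess which test files exercise the changed modules."""
    tests = set()
    for path in changed:
        test_file = Path("tests") / f"test_{Path(path).stem}.py"
        if test_file.exists():
            tests.add(str(test_file))
    return tests

if __name__ == "__main__":
    changed = changed_files()
    print("Changed files:", changed)
    print("Re-run at least:", candidate_tests(changed) or "no mapped tests found")
```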

Environment Impact Analysis

We should evaluate the impact of environmental changes on our systems too.

Now, all these measures sound great in principle. The problem is, they can be extremely difficult to apply in practice.

But there are other practices that help.

Anti-Regression Measures

Test-First Approaches

The first is test-first development.

Now, test-first implies that the whole team thinks about testing before both new development and changes, whether those changes are due to requirements or bug reports.

Test-driven development, or TDD, means developers write tests before writing code and, when done properly, means the software changes incrementally in an always-tested state.
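
As a minimal sketch of that cycle, assuming pytest and a hypothetical apply_discount function: the tests are written first (RED), the simplest implementation makes them pass (GREEN), and the code can then be tidied up with the tests as a safety net (REFACTOR).

```python
# One TDD cycle, sketched with pytest. The apply_discount example is
# hypothetical; only the ordering of the steps matters here.

# Step 1 (RED): write the tests before the code exists - they fail at first.
def test_discount_is_applied_to_order_total():
    assert apply_discount(total=100.0, percent=10) == 90.0

def test_zero_discount_leaves_total_unchanged():
    assert apply_discount(total=100.0, percent=0) == 100.0

# Step 2 (GREEN): the simplest implementation that makes the tests pass.
def apply_discount(total: float, percent: float) -> float:
    return total * (1 - percent / 100)

# Step 3 (REFACTOR): with the tests green, the implementation can be
# restructured safely - any regression shows up as a failing test.
```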

In continuous delivery environments, TDD is easier to apply and very effective at reducing regressions in later test stages.

We should not forget that test-first includes testing requirements, and testing requirements is a powerful approach.

For example, if you write Gherkin stories, creating feature scenarios not only helps testing, it can help the whole team to recognise and understand impacts.

Tracing language and feature changes across requirements gives some insight into impact too.

The use and reuse of data across requirements can point you in the right direction to find other impacts.

CI/CD Disciplines

Continuous Integration/Continuous Deployment (CI/CD) pipelines allow automated tests to run every time new code is pushed to the code base.

Continuous approaches extend the test-first concept. Test first becomes test ALWAYS.

This is why continuous delivery helps identify issues early to keep the software in a deployable state.

Code Review

Regular code reviews can help catch potential problems and prevent regressions because code changes are critically examined from new perspectives.

Now, tools can scan code in isolation, but developers and architects – humans – can look at code in the context of interfacing components too.

In this way, interfaces and collaborating components are examined more closely to find inconsistencies and impacts.

Refactoring

Regular refactoring improves code readability, maintainability and developer understanding and this reduces regressions too.

Refactoring is an essential stage of test-driven development.

The TDD mantra is RED, GREEN, REFACTOR in every cycle.

Refactoring should not be an afterthought, but it is too often neglected when time is tight.

Version Control Discipline

Good version control practices eliminate some types of regressions.

Developers are well used to version control tools such as Git. But version control in continuous and DevOps environments requires discipline from BOTH developers and software managers.

Good version control practices not only reduce regressions; the tools themselves are an invaluable aid to tracing the troublesome code that causes regression failures.

Feature Flags

Feature flags allow you to enable or disable specific features in code dynamically.

In test or production, new or changed features can be released to a specific environment or selected users.

If there are problems, the features can be withdrawn or turned off.

This doesn’t reduce regressions, but it can reduce the impact of regression failures.

So, with care, we can extend some testing into production.
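
As a minimal sketch of the idea in Python, with a hypothetical in-code flag store and pricing functions (real systems usually drive the flags from a configuration service rather than from code):

```python
# A minimal feature-flag sketch: a changed code path is only taken when the
# flag is enabled for the current environment. The flag store (a simple dict)
# and the flag/feature names are hypothetical.
FLAGS = {
    "new_pricing_engine": {"enabled": True, "environments": {"test", "staging"}},
}

def is_enabled(flag: str, environment: str) -> bool:
    config = FLAGS.get(flag, {})
    return config.get("enabled", False) and environment in config.get("environments", set())

def legacy_pricing(total: float) -> float:
    return total

def new_pricing(total: float) -> float:
    return round(total * 0.95, 2)

def calculate_price(order_total: float, environment: str) -> float:
    if is_enabled("new_pricing_engine", environment):
        return new_pricing(order_total)   # changed behaviour, behind the flag
    return legacy_pricing(order_total)    # unchanged, safe fallback

# If the new path regresses in staging, it can be switched off by flipping the
# flag, without redeploying the rest of the release.
print(calculate_price(100.0, "staging"))     # 95.0  - new engine enabled here
print(calculate_price(100.0, "production"))  # 100.0 - flag is off here
```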

Test Coverage Throughout the Life-cycle

Finally, comprehensive test coverage helps. If your entire system is covered by tests at all stages, and if those tests are repeatable and repeated, this has to help in the battle against regressions.

But this is an unachievable ideal in many situations.

Continuous delivery and test approaches are probably the least painful way of making progress.

System and User Regression Testing

System, end-to-end and user tests are a different matter. The problem is, they are often too slow and too late to keep pace with continuous development cycles.

Summary

In summary, by adopting a mix of these measures, you can significantly reduce the chances of regressions.

They definitely help to ensure new updates and changes improve the software without breaking existing functionality.

But there are no prizes for guessing the elephant in the room.

Automated Regression Testing

What is the role of automated system and user regression testing?

Automated regression testing is a very effective way to catch regressions after they occur, but it is not a prevention measure unless the tests are part of a TDD cycle.

System and user regression tests are, at best, a partial safety net that may help when other measures fail.

The problem is that impact analysis, in particular, is often difficult and can be expensive. So regression prevention is not considered economic – it’s not looked at closely enough or acted upon.

For many years, late regression testing has been the only approach used to prevent regressions reaching end-users.

As a consequence, the test automation tool market has grown dramatically.

The problem is these tools support a costly, less effective approach to regression prevention.

Late regression testing is a last resort, yet test execution tools are the go-to solution to regression problems.

We'll talk more about automated regression testing in the next video.

 
Read more...

from Paul Gerrard

14 Nov 2023

Introduction

A long time ago in a place far far away ... I got involved with an initiative to create a software testing standard eventually published as British Standard 7925.

You can download the Draft Standard(s) from here.

These are some recollections of the years 1993 to 1998. I make no promise to get any of these memories 100% correct. (I haven't consulted any of the other participants in the story; I am writing this essay on impulse). They are mostly impressions, and impressions can be wrong. But I was there and involved at least.

As the Standard explains, this initiative started in January 1989 with a Standards Working Party (SWP) formed from members of the Specialist Interest Group in Software Testing (SIGIST, now BCS SIGIST). I don't know the detail of what actually happened but, in July 1992, a draft document was produced, and after some use of it in a few places, no one seemed very happy with it.

After a period, some of the members of the SWP decided to have another attempt. There were invites to get others involved through the SIGIST network and newsletter, and I was one of the 'new intake' to the SWP.

A New Start

In January 1993, a reformed SWP re-started from scratch, but retained the existing document as a source – some of it might be reusable, but the scope, overall direction and structure of the document needed a re-think.

PA Consulting generously offered the SWP free use of one of their meeting rooms in a plush office in Victoria, London, and provided nice coffee and generous bowls of fresh fruit, I recall. The number of participants in the group varied over time, averaging 10-12, I guess. Our monthly meetings typically had 8-10 people involved.

My old friend Stuart Reid, of Cranfield University at the time, led the effort – he had experience of working with standards bodies before. Other members (whom I would name if I could remember them all) worked for organisations including the National Physical Laboratory, the Safety-Critical Systems Club, IPL and Praxis. I was a consultant at Systeme Evolutif, a UK testing services company which became Gerrard Consulting some years later. And so on.

I recall I was in a minority – I was one of a small number of consultants working in middle-of-the-road IT at the time, not safety-critical or high-integrity systems. We were sometimes at odds with the other SWP-ers, but we learnt a lot in the meantime. We got on pretty well.

Early Direction

The original goal of the SWP was to come up with consistent, useful definitions of the main test design techniques that had been described in books like Glenford J Myers' Art of Software Testing (1st Edition) and Boris Beizer's Software Testing Techniques.

At the time, there were very few books on software testing, although there were academics who had published widely on (mostly) structural test techniques such as statement, branch/decision testing and more stringent code-based approaches, as well as functional techniques such as combinatorial-, logic-, state- and X-Machine-based test design.

The descriptions of these techniques were sometimes a little imprecise and in conflict at times. So our plan was to produce a set of consistent descriptions of these.

From the eventual standard:

“The most important attribute of this Standard is that it must be possible to say whether or not it has been followed in a particular case (i.e. it must be auditable). The Standard therefore also includes the concept of measuring testing which has been done for a component as well as the assessment of whether testing met defined targets.”

Now this might not have been at the forefront of our minds at the start. Surely, the goal of the standard is to improve the testing people do? At any rate, the people who knew about standards got their way.

It seemed obvious that we would have to define some kind of process to give a context to the activity of test design, so we decided to look only at component-level testing and leave integration, system and acceptance testing to other standards efforts. We would set the path for the other test phase standards and leave it there. (Yes, we really thought for a while we could standardise ALL testing!)

Some Progress, with Difficulty

So, we limited the scope of the standard to unit, component, program, module and class testing. Of course, we had to define what a component was first. Our first challenge. We came up with:

Component: “A minimal software item for which a separate specification is available”

and:

Component testing: “The testing of individual software components. After [IEEE]”

That was kind of easy enough.

After some debate, we settled on a structure for the standard. We would define a Component Test Process and introduce each functional and structural test technique in two dimensions: as a technique for test design and as a technique for measurement (i.e. coverage).

All we had to do now was write the content, didn't we?

Difficulties and the Glossary

I think at that point, all our work was still ahead of us. We made some progress on content but got into long and sometimes heated debates. The points of dispute were often minor, pedantic details. These were important but, in principle, should have been easy to resolve. But no. It seemed that, of the seven or eight people in the room, two or three were disgruntled most of the time. We took turns to compromise and be annoyed, I guess.

From my perspective, I thought mostly, we were arguing over differing interpretations of terms and concepts. We lacked an agreed set of definitions and that was a problem, for sure.

In a lull in proceedings, I made a proposal. I would take the existing content, import it into an MS Access database, write a little code, and scan it for all the one, two, three and four word phrases. I would pick out those phrases that looked like they needed a definition and present the list at the next meeting. It took me about a day to do this. There were about 10,000 phrases in our text. I picked out 200 or so to define and presented them, and the group seemed content to agree definitions as and when the conversation used a term without one. This seemed to defuse the situation and we made good progress without the arguing.
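
An equivalent of that phrase scan can be sketched today in a few lines of Python rather than MS Access. The input file name below is an assumption, but the approach is the same: extract every one-to-four word phrase and count it, so frequently used phrases surface as candidates for a glossary definition.

```python
# A sketch of the glossary phrase scan: count every one-to-four word phrase
# in the draft text so frequent phrases surface as definition candidates.
# The input file name is an assumption for illustration.
import re
from collections import Counter

def phrases(text: str, max_words: int = 4) -> Counter:
    words = re.findall(r"[a-z][a-z'-]*", text.lower())
    counts = Counter()
    for n in range(1, max_words + 1):
        for i in range(len(words) - n + 1):
            counts[" ".join(words[i:i + n])] += 1
    return counts

if __name__ == "__main__":
    with open("draft_standard.txt", encoding="utf-8") as f:
        counts = phrases(f.read())
    # The most frequent multi-word phrases are likely glossary candidates.
    for phrase, count in counts.most_common(50):
        if count > 2 and len(phrase.split()) > 1:
            print(f"{count:4d}  {phrase}")
```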

This was probably my most significant contribution. The resulting Glossary became BS 7925-1 (small fanfare please).

https://drive.google.com/file/d/1dVufDi4ru5mU7jF_Fy-T4wkn7_s-o4G6/view?usp=drive_link

Standard First Page

Process, Damned Process

The process was not so hotly debated, but why damned? Some of the group thought it would be uncontroversial to define a process for Component Testing. But some of us (the middle-of-the-road brigade) feared that a process, no matter how gently defined and proposed, could trigger negative responses from practitioners, who might reject the standard out of hand.

We adopted a minimalist approach – there is little detail in the process section except a sequential series of activities with multiple feedback loops. There is no mention of writing code or amending code to fix bugs. It just explains the sequence and possible cycles of the five test activities.

It caused upset all the same – you can't please all the people all the time, and so on. But see my views on it later.

Finishing the Drafts

I have to say, once the structure and scope of the Standard were defined, and the Glossary definition process underway, I stepped back somewhat. I'm not a good completer-finisher. I reviewed content but didn't add a lot of material to the final draft. The group did well to crank out a lot of the detail and iron out the wrinkles.

Submission to the British Standards Institute

Stuart managed the transition from our SWP to the standards body. Knowing that standards can be expensive to purchase once published, we created a final draft and made it publicly available for free download. That final draft went to the BSI.

The SWP wrote the standard. The BSI appointed a committee (IST/15) to review and accept it.

Committees responsible

You can see that the parties involved are the sort of august organisations that might implement a Component Test Standard.

Legacy

It's just over 25 years since the standard was published on 15 August, 1998.

In April 2001, I set up a website (http://testingstandards.co.uk) which hosted downloadable copies of the (SWP Draft) standards. The site was managed by some colleagues from the SWP and others interested in testing standards at the time. Some progress was made (on some non-functional testing) but that work kind of fizzled out after a few years. I keep the website running for sentimental reasons, I suppose. It is neglected, out of date and most of the external links don't work, but the download links above do work and will continue to do so.

BS 7925-1 and -2 have been absorbed into/replaced by ISO 29119. The more recent standard, 29119, caused quite a stir and there was organised resistance to it. I fully agree that standards should be freely available, but the response from people who are not interested in, or don't believe in, standards was rather extreme and often very unpleasant.

Let standards be, I say. If people who see value in them want to use them, let them do so. Free country and all that. I don't see value in large overarching standards myself, I must say, but I don't see them as the work of the devil.

Anyway.

BS 7925 – a Retrospective

BS 7925-1 – “Vocabulary” – provided a glossary of some 216 testing terms used in the Component Testing Standard.

Vocabulary

Skimming the glossary now, I think the majority of definitions are not so bad. Of course, they don't mention technology and some terms might benefit from a bit of refinement, but they are at least (and in contrast to other glossaries) consistent.

BS 7925-2 – “Component Testing” – provided a process and definitions of the test design techniques.

BS7925-2 Component Testing

Component Test process

The final draft produced by the SWP, and the eventual standard, contained some guidelines for the process that place it squarely in the context of a waterfall or staged development approach. At the time, Extreme Programming (1999) and the Agile Manifesto (2001) lay in the future.

Does this mean the process was unusable in a modern context? Read the process definition a little more closely and I think it could fit a modern approach quite well.

Component Test Process

Forget the 'document this' and 'document that' statements in the description. A company-wide component test strategy is nonsense for an agile team with two developers. But shouldn't the devs agree some ground rules for testing before they commit to TDD or a Test-First approach? These and other statements of intent and convention could easily be captured in some comments in your test code.
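
As a minimal sketch of what that might look like, assuming pytest and a small hypothetical component, the ground rules sit as comments at the top of the test module rather than in a strategy document:

```python
# A sketch of 'ground rules as comments' in a test module: the kind of
# statements of intent a two-developer team might agree instead of writing a
# company-wide component test strategy. The conventions below are hypothetical
# examples, not text from BS 7925-2.
#
# Ground rules for this component's tests:
#   * Tests are written before the code they exercise (test-first / TDD).
#   * Every bug fix starts with a failing test that reproduces the bug.
#   * Target: 100% branch coverage on this module, measured with coverage.py.
#   * Tests run in under a second and need no external services.

import pytest

def normalise_postcode(raw: str) -> str:
    """Hypothetical component under test, defined inline to keep the sketch runnable."""
    cleaned = raw.replace(" ", "").upper()
    if not cleaned:
        raise ValueError("empty postcode")
    return f"{cleaned[:-3]} {cleaned[-3:]}"

def test_postcode_is_normalised_to_upper_case_with_single_space():
    assert normalise_postcode("sw1a 1aa") == "SW1A 1AA"

def test_empty_postcode_is_rejected():
    with pytest.raises(ValueError):
        normalise_postcode("   ")
```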

Kent Beck (who wrote the seminal book on TDD) admits he didn't invent the test-first approach but discovered it in an ancient programming book in the late 1990s. Michael Bolton helpfully provides an example – perhaps the first(?) – description of Test-First as an approach here.

We designed the component test process to assume a test-first approach was in place and it is an iterative process too – the feedback loops allow for both. Test-First might mean writing the entirety of a test plan before writing the entirety of the component code. But it could just as easily refer to a single check exercising the next line of code to be added.

So I would argue that with very little compromise, it supports TDD and automated testing. Not that TDD is regarded as a 'complete' component test approach. Rather, that the TDD method could be used to help to achieve required levels of functional and structural coverage, using good tools (such as a unit test framework and code coverage analyser).

I also appreciate that for many developers it doesn't add anything to a process they might already know by heart.

Test Techniques

Regarding the technique definitions, they pretty much stand as a concise and coherent set of descriptions. Annex B provides some good worked examples of the techniques in use. Of course, there are deeper, academic treatments of some, perhaps all, of these techniques. For example, Kaner, Padmanabhan and Hoffman wrote the 470-page “Domain Testing Workbook” – a generalisation of the equivalence partitioning and boundary value techniques – in 2013.

If you want a description of the techniques that presents them in a way that emphasises their similarity, as examples of the concept of model-based test design and measurement, you will struggle to find a better summary.

In Conclusion

This blog has been a trip down a (fallible) memory lane describing events that occurred from 34 to 25 years ago.

I am not suggesting the standard be revived, reinstated, brought up to date, or ever used. But I wanted to emphasise two things in this essay:

The BS 7925 Standards effort was substantial – overall, it took nine years from idea to completion. The second, more productive, phase took about four years. I am guessing the effort was between 300 and 500 man-days for the SWP alone. People gave their time in good faith and for no reward.

The challenges of programming and testing have not changed that much. The thinking, debate, ideals aspired to and compromises made in the late 1990s to create a component test standard are much the same.

Even so, do not think that creating a workable Component Test Standard for modern environments would be a simple task.

 
Read more...

from Paul Gerrard

Alan Julien sent me a LinkedIn message asking me to consider his LinkedIn post and the many comments that resulted.

Being the lazy person I am, and seeing there were more than two hundred comments, I copied the post and as many comments as I could get onscreen, gave it to ChatGPT and asked for a summary. Here is that summary.


Summary of the Debate on Software Testing Terminology: Manual vs. Automated Testing

The discussion, initiated by Alan Julien, critiques the terms manual and automated testing as misrepresentative of the intellectual and strategic nature of software testing. Many industry professionals support the idea that these labels oversimplify the discipline and propose alternative terminology to better reflect the depth of software testing.

Key Themes in the Debate:

1. The Misconception of “Manual Testing”

  • Many argue that manual testing implies repetitive, low-skill work when, in reality, it involves critical thinking, analysis, investigation, and risk assessment.
  • Testers engage in exploratory testing, problem-solving, and strategic planning, making the term “manual” misleading.
  • Several professionals note that testing has never been purely “manual”—tools have always assisted testing efforts.

2. “Automated Testing” is Not Fully Automated

  • The term automated testing suggests that testing can run independently of human intervention, which is not accurate.
  • Automation requires human creativity, scripting, maintenance, and analysis to be effective.
  • Many argue that “automated testing” should more accurately be called automated test execution since testing itself is a cognitive task.

3. Historical Origins of the Terms

  • Some trace the distinction to early test automation tool vendors (such as Mercury and Segue) who promoted their products by contrasting automation with “manual” testing.
  • The terminology was commercially driven and stuck due to marketing, HR practices, and job descriptions.

4. The Perceived Value and Pay Disparity

  • Automation roles are often paid higher than those labeled as “manual testers,” reinforcing the misconception that automation is superior.
  • This devalues the role of exploratory, risk-based, and analytical testing, which is essential for identifying critical defects.

5. Alternative Terminology Proposed

  • Many contributors suggest renaming these categories to emphasize cognitive and strategic aspects rather than execution methods:
    • “Exploratory Testing” vs. “Scripted Testing”
    • “Human-led Testing” vs. “Tool-assisted Testing”
    • “Feature Testing” vs. “Regression Testing”
    • “Cognitive Testing” vs. “Autonomous Testing”
    • “Augmented Testing” (human + AI collaboration)
    • “Experiential Testing” vs. “Programmatic Testing”
  • Some propose removing the labels entirely and just calling it testing, allowing for a focus on methodology rather than execution style.

6. Impact of AI on the Terminology Debate

  • AI-driven testing blurs the line further, making distinctions between “manual” and “automated” even less relevant.
  • Some advocate for terms like AI-assisted testing, rather than maintaining old binaries.

Conclusion: The Need for Change

  • The consensus is that the terms manual and automated testing are outdated, misleading, and limit the profession.
  • Industry professionals advocate for a shift in language to better reflect the intellectual and strategic contributions of testers.
  • The challenge remains in changing ingrained industry terminology, but many believe it is a battle worth fighting.

This debate highlights a broader effort to ensure software testing is recognized as a skilled, intellectual profession rather than just an execution-based task.


I agree with the summary above, and that the terms manual v automated aren’t helpful.

For some years – maybe fifteen – I have advocated we look at testing from two standpoints:

  • How we think about testing
  • How we choose to manage the logistics of testing

You could say thinking is the strategy and logistics are the tactics, but I favour a purist attitude: that IF (and only if) we separate the thinking from the practical aspects of how we strategise, prioritise, design, select and hypothesise our tests, and review the outcomes of tests after the fact, we can deal with the logistics in a more sensible way.

To me, a test is designed by a human. (Maybe the human uses a tool – Word, Excel, code, e.g. Python – to help.) Now this test could be an idea, or a 50-page document describing an immensely complex procedure to implement an end-to-end test in a very complex environment. I don’t care. The thought process (not the thoughts themselves) is the same – the content and scale can be very different, obviously.

Whether we execute a test with tools or ‘manually’ or by some other chosen piece of magic is irrelevant, if we consider the thought process as universal.

How we implement and execute a test – preparing environments, obtaining and configuring test data, running tests, validating results, analysing the outputs or results data, cleaning up environments and so on – these are logistical choices we can make. We might do some of these tasks without tools or with tools, with tools performing some or all of the logistics.

Some tasks can only be done 'manually' – that is, using our brains, pencil and paper, a whiteboard, or other aids to capture our ideas, even test cases. Or we might keep all that information in our heads. Other tasks can only be performed with tools. Every environment, application, stakeholder, goal and risk profile is different, so we need to make choices on how we actually ‘make the tests happen’.

Some tests – API tests, for example – might be executed using a browser by typing URLs into the address bar, or with code, or with a dedicated tool. All these approaches require technology. But the browser is simply the UI we use to access the APIs. Are these tests manual or automated? It's a spectrum, and our actual approach is our choice. The manual v automated label blurs the situation. But it's the logistics that are the issue, not testing as a whole.
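
To illustrate, here is the same kind of check – a hypothetical status API should return 200 and report 'ok' – written as a few lines of standard-library Python instead of typing the URL into a browser. The endpoint and payload are assumptions; the point is that only the logistics change, not the thinking behind the check.

```python
# The same test idea - "GET /status returns 200 and reports 'ok'" - executed
# as code rather than through a browser. The endpoint URL and response shape
# are hypothetical; only the standard library is used.
import json
from urllib.request import urlopen

def check_status_endpoint(base_url: str = "https://api.example.com") -> None:
    with urlopen(f"{base_url}/status", timeout=10) as response:
        assert response.status == 200, f"unexpected HTTP status {response.status}"
        body = json.loads(response.read())
        assert body.get("state") == "ok", f"unexpected payload: {body}"

if __name__ == "__main__":
    check_status_endpoint()
    print("status endpoint check passed")
```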

So. I believe there is – at some level of abstraction, and with wide variation – a perspicacious thought process for all tests. The choices we make for logistics vary widely depending on our context. You might call this a ‘context-driven approach’. (I wouldn’t, as I believe all testing is context-driven.) You might not ‘like’ some contexts or the approaches often chosen in those contexts. I don’t care – exploratory v scripted testing is partly a subjective call, and partly a contractual and/or cultural one (if you include contractual obligations or organisational culture in your context, which obviously I believe you should).

I use the model to explain why, for example, test execution is not all of testing (automated execution isn't the be-all and end-all). I use the model to explain the nuanced difference between improvised testing and pre-scripted testing. (That all testing is exploratory is an assumption of the New Model.) I use the model to highlight the importance of 'sources of knowledge', 'challenging requirements', 'test modelling' and other aspects of testing that are hard to explain in other ways.

Like all models, the New Model is wrong, but I believe, useful.

If you want to know more about the challenges of testing terminology and the New Model thought process – take a look at my videos here:

Intro to the Testing Glossary Project

The New Model for Testing explains thinking and logistics

 
Read more...

from Paul Gerrard

First published 10/06/2014

At the Unicom NextGen Testing show on 26th June (http://www.next-generation-testing.com/), I'll be pitching some ideas for where the testing world is going – in around 15 minutes. I thought I'd lay the groundwork for that short talk with a blog that sets out what I've been working on for the past few months. These things might not all be new in the industry, but I think they will become increasingly important to the testing community.

There are four areas I've been working on in between travelling, conferences and teaching.

Testers and Programming

I've been promoting the idea of testers learning to write code (or at least to become more technical) for some time. In February this year, I wrote an article for The Testing Planet: 'The Testers and Coding Debate: Can We Move on Now?' It suggested we 'move on' and that those testers who want to will find learning to code an advantage. It stirred up a lively debate, so it seems the debate is not yet over. No one is suggesting that learning how to write code should be compulsory, and no one is suggesting that testers become programmers.

My argument is this: for the investment of time and effort required, learning how to write some simple code in some language will give you a skill that you might be able to use to write your own tools, but more importantly, the confidence and vocabulary to have more insightful discussions with developers. Oh, and by the way, it will probably make you a better tester because you will have some insider knowledge on how programmers work (although some seem to disagree with that statement).

Anyway, I have taken the notion further and proposed a roadmap or framework for a programming training course for testers. Check this out: http://gerrardconsulting.com/?q=node/642

Lean Python

Now, my intention all along in the testers and programming debate was to see if I could create a Python programming course that would be of value to testers. I've been a Python programmer for about five years and believe that it really is the best language I've used for development and for testing. So, I discussed with my Tieturi friends in Finland the possibility of running such a course in Helsinki, and I eventually ran it in May.

In creating the materials, I initially thought I'd crank out a ton of PowerPoint and some sample Python code and walk the class through examples. But I changed tack almost immediately. I decided to write a Python programming primer in the Pocketbook format and extract content from the book to create the course. I'd be left with a course and a book (that I could give away) to support it. But then, almost immediately, I realised two things:

  • Firstly, it was obvious that to write such a short book, I'd have to ruthlessly de-scope much of the language and standard functions, libraries etc.
  • Secondly, it appeared that in all the Python programming I've done over the last five years, I only ever used a limited sub-set of the language anyway. Result!

And so, I only wrote about the features of the language that I had direct experience of.

I have called this Lean Python and the link to the book website is here: http://www.apress.com/gb/book/9781484223840#otherversion=9781484223857

“Big Data” and Testing the Internet of Everything

Last year, I was very interested in the notion of Big Data and the impact it might have on testing and testers. I put together a webinar titled Big Data: What is it and why all the fuss?, which you might find interesting. In the webinar, I mentioned something called Test Analytics. I got quite a few requests to explain more about this idea, so I wrote a longer article for Professional Tester Magazine to explain it. You can go to the PT website or you can download the article “Thinking Big: Introducing Test Analytics” directly from here.

Now, it quickly occurred to me that I really did not know where all this Big Data was coming from. There were hints here and there, but it subsequently became apparent that the real tidal wave that is coming is the Internet of Things (also modestly known as the Internet of Everything).

So I started looking into IoT and IoE and how we might possibly test it. I have just completed the second article in a series on Testing the Internet of Everything for the Tea Time with Testers magazine. In parallel with each article, I'm presenting a webinar to tell the story behind each article.

In the articles, I'm exploring what the IoT and IoE are and what we need to start thinking about. I approach this from the point of view of a society that embraces the technology, then look more closely at the risks we face and, finally, at how we as the IT community in general, and the testing community in particular, should respond. I'm hopeful that I'll get some kind of IoE Test Strategy framework out of the exercise.

The first article in the series appears in the March-April edition of the magazine (downloadable here) and is titled, “The Internet of Everything – What is it and how will it affect you”.

There is a video of an accompanying webinar here: The Internet of Everything – What is it and how will it affect you.

A New Model of Testing

Over the past four years, since the 'testing is dead' meme, I've been saying that we need to rethink and re-distribute testing. Talks such as “Will the Test Leaders Stand Up?” are a call to arms. How to Eliminate Manual Feature Checking describes how we can perhaps eliminate, through Redistributed testing, the repetitive, boring and less effective manual feature checking.

It seems like the software development business is changing. It is 'Shifting Left' but this change is not being led by testers. The DevOps, Continuous Delivery, Behaviour-Driven Development advocates are winning their battles and testers may be left out in the cold.

Because the shift-left movement is gathering momentum, and Big Data and the Internet of Everything are on the way, I now believe that we need a New Model of Testing. I'm working on this right now. I have presented drafts of the model to audiences in the UK, Finland, Poland and Romania, and the feedback has been really positive.

You can see a rather lengthy introduction to the idea on the EuroSTAR website here. The article is titled: The Pleasure of Exploring, Developing and Testing. I hope you find it interesting and useful. I will publish a blog with the New Model for Testing soon. Watch this space.

That's what's new in testing for me. What's new for you?

Tags: #Python #IOE #IOT #InternetofEverything #programmingfortesters #InternetofThings #NewModelTesting

 
Read more...