Anti-Regression Approaches: Impact Analysis and Regression Testing Compared and Combined – Part I: Introduction and Impact Analysis
First published 14/04/2010
Introduction
For some years, I’ve avoided getting too involved in test execution automation because I’ve felt it was boring. Yes, I know it has great promise and, in principle, surely we should be offloading the tedious, repetitive, clerical tasks to tools. But the principles of regression testing haven’t changed, and we’ve made little real progress in the last fifteen years or so. The practice is stuck in a time-warp.
I presented a talk on test automation at EuroSTAR 1997. Titled “Testing GUI Applications”, the paper is still the most popular one on my website gerrardconsulting.com, with around 300 downloads a month. Why are people still interested in stuff I wrote so long ago? It was a good, but not groundbreaking, paper; it didn’t mention the web; the recommendations for test automation were sensible, not radical. Books on the subject have been written since then. I’ve been meaning to update the paper for the new, connected world we now inhabit, but haven’t had the time so far.
But at the January 2010 Test Management Summit, I chose to facilitate the session “Regression Testing: What to Automate and How”. In the build-up to the Summit, the topic came top of the popularity survey. We had to incorporate it into the programme, but no one volunteered, so I picked it up. On the day, the frustrations I’d held for a long time came pouring out and the talk became a rather angry rant. In this series of articles, I want to set out the thoughts I presented at the Summit.
I’m going to retrace our early steps into automation and try to figure out why we (the testers) still find that building and running sustainable, meaningful automated regression test suites is fraught with difficulty. Perhaps these difficulties arise because we didn’t think it through at the beginning?
Regression tests are the tests most likely to be stable and run repeatedly, so automation promises big time savings as well as reliable, repeatable execution and results checking. The automation choice seems clear. But hold on a minute!
We regression test because things change. Chaotic, unstable environments need regression testing the most. But when things change regularly, automation is very hard. This is one of the paradoxes of testing: the development environments that most need, and would most benefit from, automated regression testing are the ones that find it hardest to implement.
Rethinking Regression
What is the regression testing thought process? In this paper, I want to step through the thinking associated with regression testing: to understand why regressions occur, to establish what we mean by regression testing, and to examine why we choose to do it and to automate it.
How do regressions occur?
Essentially, something changes and the change impacts ‘working’ software. The trigger could be a change to the environment in which the software operates, an enhancement, a bug fix whose side-effects break existing behaviour, and so on. It has been said for many years that code fixes have a 50% chance of introducing side-effects in working software. Is it 30% or 80%? Who cares? Change is dangerous, the probability of disaster is unpredictable, and we have all suffered over the years.
Regressions have a disproportionate impact on rework effort, confidence and even morale. What can we do? The two approaches at our disposal are impact analysis (to support sensible design choices and so prevent regressions) and regression testing (to detect regressions when they occur).
Impact Analysis
In assessing the potential damage that change can cause, the obvious choice is not to change anything at all. This isn’t as stupid a suggestion as it sounds. Occasionally, the value of making a change, whether fixing a bug or adding a feature, is far outweighed by the risk of introducing new, unpredictable problems. Every prospective change needs to be assessed for its potential impact on existing code and the likelihood of introducing unwanted side-effects. The problem is that assessing the risk of change, that is, impact analysis, is extremely difficult. There are two viewpoints for impact analysis: the business view and the technical view.
Impact Analysis: Business View
The first is the user or business view: prospective changes are examined to see whether they will impact the functionality of the system in ways that the user can recognise and approve of. Three types of impact are common: business-, data- and process-impacted functionality.
Business impacts often cause subtle changes in the behaviour of systems. An example might be where a change affects how a piece of data is interpreted: the price of an asset might be calculated dynamically rather than fixed for the lifetime of the asset. An asset stored at a location at one price might be moved to another location at a different price. Suddenly, the value of the now non-existent asset at the first location is positive or even negative! How can that be? The software worked perfectly, but the business impact wasn’t thought through.
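To make the mechanism concrete, here is a minimal sketch, with hypothetical names and invented figures, of how switching from a fixed to a dynamically calculated price can leave a residual value at a location that no longer holds the asset:

```python
# Hypothetical stock-ledger sketch: value accumulates from priced
# movements, so a price that changes between booking-in and moving-out
# leaves a residue at the original location.

ledger = {}  # location -> {"qty": int, "value": float}

def record_movement(location, qty, unit_price):
    """Apply a signed stock movement to a location's running totals."""
    entry = ledger.setdefault(location, {"qty": 0, "value": 0.0})
    entry["qty"] += qty
    entry["value"] += qty * unit_price

# Asset booked in at location A at a fixed price of 100.
record_movement("A", +1, 100.0)

# After the change, prices are recalculated dynamically: the same asset
# is moved out of A (and into B) at today's price of 90.
record_movement("A", -1, 90.0)
record_movement("B", +1, 90.0)

print(ledger["A"])  # {'qty': 0, 'value': 10.0} -- value with no asset!
```

The quantity at the first location is zero, but the value is not: the asset was booked in at one price and moved out at another.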
A typical data impact would be where a data item required to complete a transaction is made mandatory rather than optional. It may be that the current users rely on the data item being optional because, at the time they execute the affected transaction, the information is not yet known and is captured later. The ‘enhanced’ system might stop all business transactions going ahead or force the users to invent data to bypass the data validation check. Either way, the impact is negative.
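A minimal sketch of that mechanism, using hypothetical field names (the `settlement_date` field is my invention here):

```python
# Hypothetical validation sketch: a field that was optional is made
# mandatory, so transactions that legitimately capture it later now fail.

REQUIRED_FIELDS = {"account", "amount", "settlement_date"}  # was: {"account", "amount"}

def validate(transaction):
    """Return the list of required fields that are missing or empty."""
    return [f for f in REQUIRED_FIELDS if transaction.get(f) in (None, "")]

# Today's workflow: the settlement date is not known at entry time.
txn = {"account": "ACC-1", "amount": 250.0, "settlement_date": None}

errors = validate(txn)
if errors:
    # The 'enhanced' system blocks the transaction -- or tempts users
    # to invent a date just to get past the check.
    print("Rejected, missing:", errors)
```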
Process-impacted functionality is where a change affects the choice of paths through the system or through the business process itself. The change might, for example, cause a downstream system feature to be invoked where before it was not. Alternatively, a change might suppress the use of a feature that users were familiar with. Users might find they have to do unnecessary work, or that they have lost the opportunity to make some essential adjustment to a transaction. Wrong!
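Again, a minimal sketch with invented names and thresholds: lowering an approval limit silently routes routine transactions down a path users never saw before:

```python
# Hypothetical routing sketch: a changed threshold invokes a downstream
# feature (manual approval) for transactions that used to flow straight
# through.

APPROVAL_THRESHOLD = 500.0  # was 10_000.0 before the change

def process(amount):
    if amount > APPROVAL_THRESHOLD:
        return "queued for manual approval"  # path now taken far more often
    return "processed automatically"

# A routine transaction that previously needed no approval:
print(process(750.0))  # queued for manual approval -- unnecessary work
```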
Impact Analysis: Technical View
With regard to technical impact analysis by designers or programmers, there is a range of possibilities, and in some technical environments there are tools that can help. Very broadly, impact analysis is performed at two levels: top-down and bottom-up.
The top-down analysis involves considering the alternative design options and looking at their impact on the overall behaviour of the changed system. To fix a bug, enhance functionality or meet some new or changed requirement, there may be alternative change designs that achieve the goal. A top-down approach looks at these prospective changes in the context of the architecture as a whole, the design principles in force and the practicalities of making the changes themselves. This approach requires that the designers/developers have an architectural view, but also a set of design principles or guidelines that steer them away from bad practices. Unfortunately, few organisations have such a view or have design principles so embedded within their teams that they can rely on them.
The bottom-up analysis is code-driven. If the selected design approach impacts a known set of components that will change, the software that calls and depends upon the to-be-changed components can be traced. The higher-level services and features that ultimately depend on the changes can be identified and assessed. This sounds good in principle, especially if you have tools to generate call-trees and collaboration diagrams from code. But there are two common problems here.
The first problem is that the design integrity of the system as a whole may be poor. The dependencies between changed components and those affected by the changes may be numerous. If the code is badly structured, convoluted and looks like ‘spaghetti’, even the software experts may not be able to fathom this complexity and it can seem as though every part of the system is affected. This is a scary prospect.
The second problem is that the software changes may be at such a low level in the hierarchy of calling components that it is impractical to trace the impact of changes through to the higher-level features. Although a changed component may be buried deep in the architecture, the effect of a poorly implemented software change may be catastrophic. You may know that a higher-level service depends on a lower-level component; the problem is that you cannot figure out what that dependency is, so you cannot predict and assess the impact of the proposed change.
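Mechanically, the bottom-up trace itself is simple enough: it amounts to a transitive closure over a reverse call graph. Here is a minimal sketch; the call graph is hand-written for illustration, where a real tool would extract it from the code:

```python
# Minimal sketch of bottom-up impact analysis: given a call graph
# (caller -> callees), invert it and walk upwards from the changed
# component to find everything that transitively depends on it.

from collections import defaultdict

calls = {
    "checkout_feature": ["pricing_service", "stock_service"],
    "reporting_feature": ["stock_service"],
    "pricing_service": ["currency_utils"],
    "stock_service": ["currency_utils", "db_layer"],
}

# Invert the graph: component -> set of direct callers.
callers = defaultdict(set)
for caller, callees in calls.items():
    for callee in callees:
        callers[callee].add(caller)

def impacted_by(changed):
    """Return everything that directly or indirectly calls `changed`."""
    affected, frontier = set(), [changed]
    while frontier:
        for caller in callers[frontier.pop()]:
            if caller not in affected:
                affected.add(caller)
                frontier.append(caller)
    return affected

print(impacted_by("currency_utils"))
# {'pricing_service', 'stock_service', 'checkout_feature',
#  'reporting_feature'} (set order may vary)
```

The difficulty, as described above, is not the walk but the graph: with spaghetti dependencies the affected set balloons to ‘everything’, and with deeply buried components the graph tells you that a dependency exists without telling you what it means.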
All in all, impact analysis is a tricky prospect. Can regression testing get us out of this hole?
To be continued...
Paul Gerrard
29 March 2010.
Tags: #testautomation #regressiontesting #impactanalysis