Testing the Code that Checks Code
First published 22/09/2017
Twitter is a lousy medium for debate, don't you think?
I had a very brief exchange with Michael Bolton below. (Others have commented on this thread this afternoon). To my apparently contradictory (and possibly stupid) comment, Michael responded with a presumably perplexed “?”
This blog is a partial explanation of what I said, and why I said it. You might call it an exercise in pedantry. (Without pedantry, there is less joy in the world – discuss). There's probably a longer debate to be had, but certainly not on Twitter. Copenhagen perhaps, Michael? My response was to the third tweet:

“3) Lesson: don't blindly trust ... your automated checks, lest they fail to reveal important problems in production code.”

I took the lesson tweet out of context and deliberately ignored the first two tweets; I'll comment on those below. For the third, I also ignored the “don't blindly trust your test code” aspect, and here's why. If you have test code that operates at all, and you have automated checks that operate, you presumably trust the test code already. You will have already done whatever testing of the test code you deemed appropriate. I was more concerned with the second aspect: don't blindly trust the checks.
But you know what? My goal with automation is exactly that – to blindly trust automated checks.
If you have an automated check that runs at all, then given the same operating environment, test data, software versions, configuration and so on, you would hardly expect the repeated check to reveal anything new. If it did 'fail', then it really ought to flag some kind of alarm. If you are not paying attention or are ignoring the alarm, then on your own head be it. But if I have to be paying attention all the time, effectively babysitting – then my automation is failing. It is failing to replace my manual labour (often the justification for automating in the first place).
A single check is most likely to be run as part of a larger collection of tests, perhaps thousands, so the notification process needs to be integrated with some form of automated interpretation or at least triggered when some pre-defined threshold is exceeded.
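By way of illustration only – the helper names and the threshold below are assumptions, not any particular tool's API – the aggregation and alarm might look something like this:

```python
# Minimal sketch: run a collection of checks, record failures rather than
# stopping at the first, and raise an alarm only when a pre-defined
# threshold is exceeded. send_alert and FAILURE_THRESHOLD are illustrative.

FAILURE_THRESHOLD = 1  # alarm as soon as one check fails; tune to taste


def send_alert(subject, body):
    # Placeholder for a real channel: email, chatOps webhook, pager, etc.
    print(f"ALERT: {subject}\n{body}")


def run_suite(checks):
    """Run every check, collecting failures instead of stopping at the first."""
    failures = []
    for check in checks:
        try:
            check()
        except AssertionError as exc:
            failures.append((check.__name__, str(exc) or "assertion failed"))
    return failures


def notify_if_needed(failures):
    """Only interrupt a human when the threshold is crossed."""
    if len(failures) >= FAILURE_THRESHOLD:
        send_alert(
            subject=f"{len(failures)} automated check(s) failed",
            body="\n".join(f"{name}: {reason}" for name, reason in failures),
        )


if __name__ == "__main__":
    def check_homepage_title():
        assert "Welcome" in "Welcome to the shop"   # passes

    def check_order_total():
        assert 2 + 2 == 5, "total mismatch"         # deliberately fails

    notify_if_needed(run_suite([check_homepage_title, check_order_total]))
```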
Why blindly? Well, we humans are cursed by our own shortcomings. We have short attention spans, we are blind to things in front of us when we aren't paying attention to them, and we are limited in what we can observe and assimilate anyway. We use tools to replace humans not least because of our poor ability to pay attention.
So I want my automation to act as if I'm not there and to raise alarms in ways that do not require me to be watching at the time. I want my phone to buzz, or my email client to bong, or my chatOps terminal to beep at me. Better still, I want the automation to choose who to notify. I want to be the CC: or BCC: in the message, not necessarily the To: all the time.
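Purely as a sketch of that routing idea – the addresses and the severity rule below are invented for illustration – the 'who gets told' decision might be as simple as:

```python
# Sketch only: route a failure notification to the right people by severity,
# with the tester CC'd rather than always sitting in the To: field.

ROUTING = {
    "critical": {"to": ["on-call@example.com"], "cc": ["tester@example.com"]},
    "minor":    {"to": ["team@example.com"],    "cc": ["tester@example.com"]},
}


def classify(failures):
    """Crude illustrative rule: any failure mentioning 'payment' is critical."""
    return "critical" if any("payment" in f for f in failures) else "minor"


def route_notification(failures):
    """Build a message addressed to the people who should act on it."""
    if not failures:
        return None
    recipients = ROUTING[classify(failures)]
    return {
        "to": recipients["to"],
        "cc": recipients["cc"],
        "subject": f"{len(failures)} automated check failure(s)",
        "body": "\n".join(failures),
    }


print(route_notification(["payment: total mismatch", "search: slow response"]))
```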
I deliberately took an interpretation of Michael's comment that he probably didn't intend. (That's Twitter for you).
When my automated checks run, I don't expect to have to evaluate whether the test code is doing 'the right thing' every time. But over time, things do change – the environment, configuration and software under test – so I need to pay attention to whether these changes impact my test code. Potentially, the check needs adjustment, re-ordering, enhancing, replacing or removing altogether. The time to do this is before the test code is run – during requirements discussions or in collaboration with developers.
I believe this was what Michael intended to highlight: your test code needs to evolve with the system under test and you must pay attention to that.
Now, my response to the tweet suggests that, rather than babysitting your automated checks, you should spend your time more wisely – testing the system in ways your test code cannot (economically).
To the other tweets:
“1) What would you discover if you submitted your check code to the same review, scrutiny, and testing as your production code?”

“2) If you're not scrutinizing test code, why do you trust it any more than your production code? Especially when no problems are reported?”

Test code can be trivial, but it can sometimes be more complex than the system under test. It's the old, old story: who tests the test code, and how? I have worked on a few projects where test code was treated like any other code – high-integrity projects and the like – but even then I didn't see much 'test the test code' activity. I'd say there are some common factors that make it less likely you would test your test code, and feel safe (enough) not doing so:
- Test code is built incrementally, usually, so that it is 'tried' in isolation. Your test code might simulate a web or mobile transaction, for example. If you can watch it move to fields, enter data and check the outcomes correctly, most testers would be satisfied that it works as a simple check. What other test is required than re-running it and expecting the same outcome each time?
- Where the check is data-driven, of course, the code uses prepared data to fill, click or check parameterised fields, buttons and outcomes respectively. On a GUI app this can be visibly checked. Should you try invalid data (not included in your planned test data) and so on? Why bother? If the test code fails, then that is notification enough that you screwed up – fix it. If the test code raises false alarms when, for example, your environment changes, then you have a choice: tidy up your environment, or add code to accommodate acceptable environmental variations. (A minimal sketch of a data-driven check follows this list.)
- Now, when your test code loses synchronisation or encounters a real mismatch of outcomes, your code needs handlers for these situations. These handlers might be custom-built for every check (an expensive solution) or utilise system-wide procedures to log, recover, re-start or hand off, depending on the nature of the tests or failures. This ought to be where your framework or scaffolding code comes in. (A sketch of such a handler also follows the list.)
- Surely the test code needs testing more than just 'using it'? The thing is, your test code is not handed over to users for them to enter extreme, dubious or poor-quality data. All the data it will ever handle is in the test suite you use to test the system under test. Another tester might add new rows of test data to feed it, but problems that arise are as likely to be due to things other than the new test data. At any rate, what tests would you apply to your test code? Your test data, selected to exercise extremes in your system under test, is probably quite well suited to testing the test code anyway.
- When problems do arise as your test code runs, they are more likely to be caused by environmental or data problems, or by software changes, so your test code will be adapted in parallel with these changes, or made more resilient to variations (bearing in mind the original purpose of the test code).
- Your scaffolding code or home-grown test framework handles this, doesn't it? Pretty much the same arguments above apply. It is likely to be made more robust through use, evolution and adaptation than through a lot of planned tests.
- Who tests the tests? Who tests the tests of the tests? Who tests the tests of ...
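To make the data-driven point above concrete, here is a minimal sketch assuming pytest; the system under test is a stand-in function, but in practice the same shape applies to a check driving a web or mobile transaction:

```python
# Sketch of a data-driven check, assuming pytest. apply_discount is a
# stand-in for the production behaviour being checked.
import pytest


def apply_discount(total, code):
    """Stand-in for the system under test."""
    return round(total * 0.9, 2) if code == "SAVE10" else total


# Each row is one prepared case: inputs plus the expected outcome.
CASES = [
    (100.00, "SAVE10", 90.00),
    (100.00, "BOGUS", 100.00),
    (0.00, "SAVE10", 0.00),
]


@pytest.mark.parametrize("total, code, expected", CASES)
def test_apply_discount(total, code, expected):
    # The check itself stays trivial; the data rows carry the variation.
    assert apply_discount(total, code) == expected
```

The check code barely changes as rows are added; the data does the work, which is why running the check against your planned test data exercises most of what the check code will ever see.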
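And for the handler and scaffolding points, a rough sketch of centralised recovery logic – the names and the retry policy are assumptions, not any specific framework's behaviour:

```python
# Sketch of scaffolding-level failure handling: log the mismatch, optionally
# retry, and hand off for investigation rather than aborting the whole run.
import logging
import traceback

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("scaffolding")


def run_with_handlers(check, retries=1):
    """Run one check; on a mismatch, log and retry; on anything else, hand off."""
    for attempt in range(retries + 1):
        try:
            check()
            return True
        except AssertionError:
            log.warning("Outcome mismatch in %s (attempt %d):\n%s",
                        check.__name__, attempt + 1, traceback.format_exc())
        except Exception:
            # Loss of synchronisation, environment trouble, and so on.
            log.error("Check %s could not run:\n%s",
                      check.__name__, traceback.format_exc())
            break  # retrying is unlikely to help; hand off for investigation
    return False


if __name__ == "__main__":
    def failing_check():
        assert False, "outcome mismatch"

    run_with_handlers(failing_check)
```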
I'm not suggesting that under all circumstances your test code doesn't need testing. But it seems to me that, in most situations, the code that actually performs a check is tested well enough by your test data, and that most exceptions arise as you develop and refine that code and can be fixed before you come to rely on it.
Tags: #testautomation #automation #advancingtesting #automatedregression
Paul Gerrard