First published 03/07/2006
A couple of weeks ago, after the BCS SIGiST meeting, I was chatting to Martin Jamieson (of BT) about tools that test 'beneath the GUI'. A while later, he emailed a question...
At the recent SIGIST Johanna Rothman remarked that automation should be done below the level of the GUI. You then stood up and said you're working on a tool to do this. I was explaining to Duncan Brigginshaw (www.odin.co.uk) yesterday that things are much more likely to change at the UI level than at the API level. I gave him your example of programs deliberately changing the html in order to prevent hacking – is that right? However, Duncan tells me that he recommends automating at the UI. He says that the commercial tools have ways of capturing the inputs which shield the tester from future changes e.g. as objects. I think you've had a discussion with Duncan and am just wondering what your views are. Is it necessary to understand the presentation layers for example?
Ultimately, all GUI interfaces need testing, of course. The rendering and presentation of HTML, and the execution of JavaScript, ActiveX and Java objects, obviously need a browser involved for a user to validate their behaviour. But Java/ActiveX can be tested through drivers written by programmers (and many are).
Typically, JavaScript isn't directly accessible to GUI tools anyway, as it is mostly used for field validation and manipulation, screen formatting and window management. You can write whole applications in JavaScript if you wish.
But note that I'm saying that a browser is essential for a user to validate layout and presentation. If you go down the route of using a tool to automate testing of the entire application from GUI through to server-based code, you need quite sophisticated tools, with difficult-to-use scripting (programming) languages. And lo and behold, to make these tools more usable/accessible to non-programmers, you need tools like AXE to reduce (sometimes dramatically) the complexity of the scripting language required to drive automated tests.
Now, one of the huge benefits of these kinds of testing frameworks, coupled with 'traditional' GUI test tools, is that they allow less technical testers to create, manage and execute automated tests. But if you were to buy a Mercury WinRunner or QTP licence plus an AXE licence, you'd be paying £6k or £7k PER SEAT, before discounts. This is hugely expensive if you think about what most automated tools are actually used for – compared with a free tool that can execute tests of server-based code directly.
Most automated tools are used to automate regression tests. Full stop. I've hardly ever met a system tester who actually set out to find bugs with tools. (I know 'top US consultants' talk about such people, but they seem to be a small minority.) What usually happens is that the tester needs to get a regression test together. Manual tests are run; when the software is stable and the tests pass, they get handed over to the automation folk. I know, I know: AXE and tools like it allow testers to create automated test runs. However, with buggy software, you never get past the first run of a new test. So much for running the other 99 using the tool – why bother?
Until you can run the other 99, you don't know whether they'll find bugs anyway. So folk resort to running them manually, because you need a human being checking results and anomalies, not a dumb tool. The other angle is that most bugs aren't what you expect – by definition. Checking a calculation result might be useful, but the tab order, screen validation, navigation, window management/consistency, usability and accessibility AREN'T in your prepared test plan anyway. So much for finding bugs proactively using automation. (Although bear in mind that free/cheap tools exist to validate HTML, accessibility, navigation and validation.)
And after all this, be reminded: the calculated field is actually generated by the server-based code. The expected result is a simple number, state variable or message. The position, font, font size and 57 other attributes of the field it appears in are completely irrelevant to the test case. The automated tool is, in effect, instructed by the framework tool to ignore these things, and focus on the tester's predicted result.
It's interesting (to me anyway) that the paper most downloaded from the Gerrard Consulting website is my paper on GUI testing: gerrardconsulting.com/GUI/TestGui.html. It was written in 1997. It gets downloaded roughly 150 to 250 times a month. Why is that, for heaven's sake – it's nine years old! The web isn't even mentioned in the paper! I can only think people are obsessed with the GUI and haven't got a good understanding of how you 'divide and conquer' the complex task of testing a GUI into simpler tasks: some that can be automated beneath the GUI, some that can be automated using tools other than GUI test running tools, some that can be automated using GUI test running tools, and some that just can't be automated. I'm sure most folk with tools are struggling to meet higher-than-realistic expectations.
So, what we are left with is an extremely complex product (the browser), being tested by a comparably complex (probably more complex) product, being controlled by another complex product, all to make the creation, execution and evaluation of tests of (mainly) server-based software an easy task. Although it isn't, of course. Frameworks work best with pretty standard websites or GUI apps built on standard technologies. Once you go off the beaten track, the browser vendor, the GUI tool vendor and the framework vendor all need to work hard to make their tools compatible. But all must stick to the HTTP 1.0 protocol, which is now ten years old. How many projects set themselves up to use bleeding-edge technology, and screw the testers as a consequence? Most, I think.
So. There we have it. If you are fool enough to spend £4-5,000 per seat on a GUI tool, you then need to be smart enough to spend another £2,000 or so on a framework (PER USER).
Consider the alternative.
Suppose you knew a little about HTML/HTTP etc. Suppose you had a tool that allowed you to get web pages, interpret the HTML, insert values into fields, submit the form, execute the server-based form handler, receive the generated form, validate the form in terms of new field values, save copies of the received forms on your PC, compare those forms with previously received forms, deal with the vagaries of secure HTTPS, and ignore the complexities of the user interface. The tool could have a simple script language, based on keywords/commands, stored in CSV files, managed by Excel.
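To make that concrete, here's a minimal sketch in Python of the kind of CSV-driven tool I mean. The keywords, URLs and file names are made up for illustration – my tool's actual command set differs:

```python
# keyword_driver.py - a minimal sketch of a CSV-driven HTTP test tool.
# A script might read:
#   Get,http://example.com/login.html
#   SetField,username,paul
#   SetField,password,secret
#   Submit,http://example.com/login-handler
#   CheckText,Welcome back
import csv
import sys
import urllib.parse
import urllib.request

def run_script(path):
    fields = {}          # field values accumulated by SetField
    last_response = ""   # HTML of the last page received
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if not row or row[0].startswith("#"):
                continue                      # skip blanks and comments
            keyword, args = row[0], row[1:]
            if keyword == "Get":
                last_response = urllib.request.urlopen(args[0]).read().decode()
            elif keyword == "SetField":
                fields[args[0]] = args[1]
            elif keyword == "Submit":
                # POST the accumulated values to the server's form handler
                data = urllib.parse.urlencode(fields).encode()
                last_response = urllib.request.urlopen(args[0], data).read().decode()
                fields = {}
            elif keyword == "CheckText":
                print(("PASS" if args[0] in last_response else "FAIL")
                      + f": expected text {args[0]!r}")
            else:
                print(f"WARN: unknown keyword {keyword!r}")

if __name__ == "__main__":
    run_script(sys.argv[1])
```

Cookies, the HTTPS quirks, saved-form comparison and so on would layer on top, but the core loop really is that small.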
If the tool could scan a form, not yet tested, and generate the script code to set the values for each field in the form, you'd have a basic but effective script capture facility. Cut and paste into your CSV file and you have a pretty effective tool. Capture the form map (not the GUI map – you don't need all that complexity, of course) and use the code to drive new test transactions.
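Again as a sketch, scanning a form and generating the script lines might look something like this (same hypothetical keywords as above):

```python
# formscan.py - sketch of the 'scan a form, generate the script code'
# idea: emit one SetField line per field, ready to paste into a CSV.
from html.parser import HTMLParser
import sys
import urllib.request

class FormScanner(HTMLParser):
    """Collects the name and default value of each field in the page."""
    def __init__(self):
        super().__init__()
        self.fields = []
        self.action = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form" and self.action is None:
            self.action = attrs.get("action", "")
        elif tag in ("input", "select", "textarea") and "name" in attrs:
            self.fields.append((attrs["name"], attrs.get("value", "")))

if __name__ == "__main__":
    scanner = FormScanner()
    scanner.feed(urllib.request.urlopen(sys.argv[1]).read().decode())
    for name, value in scanner.fields:
        print(f"SetField,{name},{value}")   # edit the values, then...
    print(f"Submit,{scanner.action}")       # ...post to the form handler
```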
That's all pretty easy. The tool I've built does 75-80% of this now. My old Systeme Evolutif website (including online training) had 17,000 pages, with around 100 Active Server Pages script files. As far as I know, there's nothing the tool cannot test in those 17,000 pages. Of course, most are relatively simple. But they are only simple in that they use a single technology. There are thousands of lines of server-based code. If/as/when I create a regression test pack for the site, I can (because the tool runs from the command line) run that test every hour against the live site. (Try doing that with QTP.) If there is a single discrepancy in the HTML that is returned, the tool will spot it, of course. I don't need to use the GUI to do that. (One has to assume the GUI/browser behaves reliably, though.)
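The hourly check itself needs nothing more than a baseline compare – something like this sketch, kicked off hourly from cron or the Windows scheduler (URLs and file names are illustrative):

```python
# regcheck.py - sketch of a command-line regression check: fetch a
# page, compare it against a saved baseline, report any discrepancy.
import sys
import urllib.request
from pathlib import Path

def check(url, baseline_path):
    current = urllib.request.urlopen(url).read().decode()
    baseline = Path(baseline_path)
    if not baseline.exists():
        baseline.write_text(current)      # first run: record the baseline
        return "BASELINE SAVED"
    if current == baseline.read_text():
        return "PASS"
    return "FAIL: returned HTML differs from baseline"

if __name__ == "__main__":
    print(check(sys.argv[1], sys.argv[2]))
```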
Beyond that, a regression test based on the GUI appearance would never spot things in the HTML unless you wrote code specifically to do that. Programmers often place data in hidden fields. By definition, hidden fields never appear on the GUI. GUI tools would never spot a problem – unless you wrote code to validate the HTML specifically. Regression tests focus on results generated by server-based code. Requirements specify outcomes that usually do not involve the user interface. In most cases, the user interface is entirely irrelevant to the successful outcome of a functional test. So, a test tool that validates the HTML content is actually better than a GUI tool (please note). By the way, GUI tools don't usually have very good partial matching facilities. With code-based tools, you can use regular expressions (regexes). Much better control for the tester than GUI tools.
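For example (hypothetical field names and page content), here are two regex checks against the raw HTML – one of them against a hidden field that no GUI-level regression test would ever see:

```python
# Partial matching with regular expressions - trivial against raw
# HTML, awkward in a GUI tool. Field names and content are made up.
import re

html = """
<input type="hidden" name="session_state" value="STEP-3-CONFIRM">
<p id="total">Order total: &pound;1,234.56</p>
"""

# The hidden field never appears on screen, so a GUI tool cannot
# see it - a regex against the returned HTML can.
assert re.search(r'name="session_state"\s+value="STEP-\d-CONFIRM"', html)

# Partial match on the displayed total: check the amount itself,
# ignoring fonts, position and the other 57 attributes.
match = re.search(r"Order total: &pound;([\d,]+\.\d{2})", html)
assert match and match.group(1) == "1,234.56"
```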
Finally. If you use a tool to validate returned messages/HTML, you can get the programmer to write code that syncs with the test tool. A GUI with testability! For example, the programmer can work with the tester to provide the 'expected result' in hidden fields. Encrypt them if you must. The developer can 'communicate' directly with the tester. This is impossible if you focus on the GUI. It's really quite hard to pass technical messages in the GUI without the user being aware.
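A sketch of the idea – the field name here is a made-up convention you'd agree with your developers:

```python
# Testability hook: the programmer emits the expected result in a
# hidden field; the test tool checks the displayed value against it.
# The field name 'test_expected_premium' is a hypothetical convention.
import re

# What the server might return: the calculated premium on display,
# plus the server's own expected value, hidden, for the tool to read.
html = """
<input type="hidden" name="test_expected_premium" value="842.17">
<span id="premium">842.17</span>
"""

expected = re.search(r'name="test_expected_premium" value="([^"]+)"', html).group(1)
displayed = re.search(r'<span id="premium">([^<]+)</span>', html).group(1)
print("PASS" if expected == displayed else f"FAIL: {displayed} != {expected}")
```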
So. Tools that drive server-based code are more useful (to programmers in particular, because you don't have the unnecessary complexities of the GUI). They work directly on the functionality to be tested – the server-based code. They are simpler to use. They are faster (there's no browser/GUI and test tool in the way). They are free. AND they are more effective in many (more than 50%?) cases.
Where such a tool COULD be used effectively, who in their right mind would choose to spend £6,000-7,000 per tester on LESS EFFECTIVE products?
Oh, and did I say, the same tool could test the other Internet protocols – mail, FTP etc. – and could easily be enhanced to cover web services (SOAP, WSDL, blah blah etc.) – the next big thing – but actually services WITHOUT a user interface! Please don't get me wrong, I'm definitely not saying that GUI automation is a waste of time!
In anything but really simple environments, you have to do GUI automation to achieve coverage (whatever that means) of an application. However, there are aspects of the underlying functionality that can be tested beneath the GUI, and sometimes it can be more effective to do that – but only IF there aren't complicated technical issues in the way (issues that would be hidden behind the GUI, and that the GUI tool ignores).
What's missing in all this is a general method that guides testers in choosing between manual testing, automation above the GUI and automation below the GUI. Have you ever seen anything like that? One of the main reasons people get into trouble with automation is that they have too-high expectations and are overambitious. It's the old 80/20 rule: 20% of the functionality dominates the testing (but could be automated). Too often, people try to automate everything. Then 80% of the automation effort goes on fixing the tool to run the least important 20% of the tests. Or something like that. You know what I mean.
The beauty of frameworks is that they hide the automation implementation details from the tester. Wouldn't it be nice if the framework SELECTED the optimum automation method as well? I guess this should depend on the objective of a test. If the test objective doesn't require use of the GUI – don't use the GUI tool! Current frameworks have 'modes' based on the interfaces to the tools: either they do GUI stuff, or they do web services stuff, or... But a framework ought to be able to deal with GUI, under-the-GUI, web services, command-line stuff etc. Just a thought.
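Just to illustrate the thought (all the names here are hypothetical), such a framework might route each test to an executor based on its stated objective:

```python
# Sketch of the 'framework picks the channel' idea: route each test
# to a GUI, under-the-GUI (HTTP) or command-line executor according
# to the test's objective. All names are invented for illustration.
from typing import Callable

def run_via_gui(test):  print(f"GUI tool drives: {test['name']}")
def run_via_http(test): print(f"HTTP driver runs: {test['name']}")
def run_via_cli(test):  print(f"Command line runs: {test['name']}")

# The framework's routing table: test objective -> executor.
EXECUTORS: dict[str, Callable] = {
    "layout":   run_via_gui,   # presentation needs a real browser
    "function": run_via_http,  # server-side logic doesn't
    "batch":    run_via_cli,
}

tests = [
    {"name": "login page renders",  "objective": "layout"},
    {"name": "premium calculation", "objective": "function"},
    {"name": "overnight extract",   "objective": "batch"},
]

for test in tests:
    EXECUTORS[test["objective"]](test)
```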
I feel a paper coming on. Maybe I should update the 1997 article I wrote!
Thanks for your patience and for triggering some thoughts. Writing the email was an interesting way to spend a couple of hours, sat in a dreary hotel room/pub.
Posted by Paul Gerrard on July 4, 2006 03:08 PM
Comments
Good points. In my experience the GUI does change more often than the underlying API.
But often, using the ability of LR to record transactions is still quicker than hoping I've reverse-engineered the API correctly. More than once I've had to do it without any help from developers or architects. ;-)
Chris http://amateureconblog.blogspot.com/
Paul responds:
Thanks for that. I'm interested to hear you mention LR (I assume you mean LoadRunner). LoadRunner can obviously be used as an under-the-bonnet test tool. And quite effective it is too. But one of the reasons for going under the bonnet is to make life simpler and, as a consequence, a LOT cheaper.
There are plenty of free tools (and scripting languages with neat features) that can be perhaps just as effective as LR in executing basic transactions – and that's the point. Why pay for incredibly sophisticated tools that compensate for each other's complexity, when a free, simple tool can give you 60, 70, 80% of what you need as a functional tester?
Now LR provides the facilities, but I wouldn't recommend LR as a cheap tool! What's the going rate for an LR licence nowadays? $20k, $30k?
Thanks. Paul.
Tags: #testautomation #gui #regressiontesting