Paul Gerrard

My experiences and opinions from the Test Engineering business. I am republishing/rewriting old blog posts from time to time.

First published 05/11/2009

Some foundation/visionary work to define ERP-specific test methods and tools has already been performed by the author. However, the approach needs more research, rigour and proving in a commercial environment. Academic and commercial partners are sought to refine, develop and prove these methods and tools. The paper gives an overview of the value of reporting test progress with reference to risk.

Registered users can download the paper from the link below. If you aren't registered, you can register here.

Tags: #risk-basedtesting #sap #erp

Paul Gerrard. Please connect and contact me via my LinkedIn profile or my Mastodon account.

First published 06/11/2009

Getting the requirements ‘right’ for a system is a pre-requisite for successful software development, but getting requirements right is also one of the most difficult things to achieve. There are many difficulties to overcome in articulating, documenting and validating requirements for computer systems. Inspections, walkthroughs and prototyping are the techniques most often used to test or refine requirements. However, in many circumstances, formal inspections are viewed as too expensive, walkthroughs as ineffective and prototyping as too haphazard and uncontrolled to be relied on.

Users may not have a clear idea of what they want, and are unable to express requirements in a rational, systematic way to analysts. Analysts may not have a good grasp of the business issues (which will strongly influence the final acceptance of the system) and tend to concentrate on issues relevant to the designers of the system instead. Users are asked to review and accept requirements documents as the basis for development and final acceptance, but they are often unable to relate the requirements to the system they actually envisage. As a consequence, it is usually a leap of faith for the users when they sign off a requirements document.

This paper presents a method for decomposing requirements into system behaviours which can be packaged for use in inspections, walkthroughs and requirements animations. Although not a formal method, it is suggested that by putting some formality into the packaging of requirements, the cost of formal inspections can be reduced, effective walkthroughs can be conducted and inexpensive animations of requirements can be developed.

Registered users can download the paper from the link below. If you aren't registered, you can register here.

Tags: #testingrequirements


First published 06/11/2009

This paper, written by Paul Gerrard, was presented to the EuroSTAR Conference in Edinburgh, November 1997. It is the most popular and most downloaded paper on this site, which perhaps tells you that the art of GUI testing hasn't moved on that much in the last ten years.

Download Testing GUI Applications.

Tags: #gui #guiapplications


First published 05/05/2011

It’s our experience that a test approach should be tailored to meet the specific needs of an organisation and/or a project.

There is no ‘one size fits all’.

Testing needs to take into account organisational culture, people, suppliers and ways of working, which together combine to make a unique situation. No two organisations are exactly the same. For a project, testing has to account for stakeholder goals, business objectives, the technical environment and the development approach being deployed. No two projects, even within the same organisation, are the same. We apply our extensive knowledge and experience, along with industry good practice, to define:

  • An organisation wide testing framework to be used as a model for your individual projects, taking account of both waterfall and agile methods
  • A project specific testing approach that meets your quality goals and takes account of your budget and timescale demands

We will deliver a testing approach that is tailored to fit your needs. If you’d like to know more, please contact us directly or buy the Tester's Pocketbook.

*** If you are interested in an on-site workshop, see our 'Test Strategy in a Day' workshop ***



Tags: #TestApproach #TestStrategy


First published 25/11/2009

Ref Michael B's testing v checking – interesting but...

comparison with scripted – only talks about execution

thought process leading to scripts – similar to exploration and more productive?

scripted: super-slow motion

load our expectation bullet – kerching

before we pull the trigger – are explorers at a disadvantage?

benefits of planned, scripted testing

is MB comparing an expert explorer with a dumb test runner – i.e. a tool?

could we get a valid comparison of scripted v exploratory?

Tags: #ALF


First published 06/11/2009

A discussion of the main improvement types possible and how the TOM maturity model can be used to get a process improvement initiative started.

Registered users can download the paper from the link below. If you aren't registered, you can register here.

Tags: #softwaresuccessimprovement


First published 30/03/2007

The raw materials of real engineering – steel, concrete, water, air, soil, electromagnetic waves, electricity – obey the laws of physics.

Software, of course, does not. Engineering is primarily about meeting trivial functional requirements and complex technical requirements using materials that obey the laws of physics.

I was asked recently whether the definitions – Functional and Non-Functional – are useful.

My conclusion was that, at best, they aren't helpful; at worst, they are debilitating. There are probably half a dozen other themes in the initial statement but I'll stick to this one.

There is a simple way of looking at functional v non-functional requirements. FRs define WHAT the system must do. NFRs define HOW the system delivers that functionality – e.g. whether it is secure, responsive, usable, etc.

I would suggest that calling anything ‘not something else’ can never be intuitively correct if you need that definition to understand the nature of the concept in hand. It's a different dimension, perhaps. Non-functional means ‘not working’, doesn't it?

Imagine calling something long “not heavy”. It's the same idea and it's just as unhelpful: it's not ‘not heavy’, because you are describing a different attribute.

So, to understand the nature of Non-Functional Requirements, it's generally easier to call them technical requirements and have done with it.

Some TRs are functional, of course, and that's another confusion. Access control to data and function is a what, not a how. Security vulnerabilities are, in effect, functional defects: the system does something we would rather it didn't. Pen testing is functional testing. Security invulnerability is a functional requirement – it's just that most folk are overwhelmed by the potential variety of threats. Pen tests use a lot of automation and specialised tools. But they are specialised, not non-functional.

These are functional requirements just like the features the users actually want. Installability, documentation, procedure and maintainability are ALL functional requirements, and they are functionally tested.

The other confusion is that functional behaviour is Boolean: it works or it doesn't work. Of course, you can count the number of trues and falses, but that is meaningless. 875 out of 1000 test conditions pass. It could be expressed as a percentage, but what exactly does that mean? Not much, until you look into the detail of the requirements themselves. One single condition could be several orders of magnitude more important than another. Apples and oranges? Forget it. Grapes and vineyards!
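To make the arithmetic behind that objection concrete, here is a small illustration (the conditions and weights below are invented for the example): a raw pass percentage looks healthy, while weighting the same results by importance tells a very different story.

```python
# Hypothetical tally: a raw pass percentage hides the fact that test
# conditions differ wildly in importance.

conditions = [
    # (condition, passed, relative importance -- invented weights)
    ("login works",          True,  1000),
    ("report column sorts",  True,  1),
    ("payments settle",      False, 1000),
    ("help text spelling",   True,  1),
]

# Raw pass rate: every condition counts equally.
raw_pass_rate = sum(passed for _, passed, _ in conditions) / len(conditions)

# Weighted pass rate: each condition contributes its importance.
total_weight = sum(w for _, _, w in conditions)
weighted_pass_rate = sum(w for _, passed, w in conditions if passed) / total_weight

print(f"raw:      {raw_pass_rate:.0%}")       # 75% -- looks acceptable
print(f"weighted: {weighted_pass_rate:.1%}")  # 50.0% -- half the value at risk fails
```

The single failing condition carries as much weight as the other three combined, so the comfortable-looking 75% is, on this weighting, really a coin toss.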

Technical behaviour is usually measurable on a linear scale. Performance and reliability, for example (if you have enough empirical data to be significant), are measured numerically. (OK, you can say meets v doesn't meet requirements is a Boolean, but you know what I mean.)

Which brings me to the point.

In proper engineering, say civil/structural... (and, betraying a prejudice, structural is engineering; civil includes all sorts of stuff that isn't...)

In structural engineering, for example, the functional requirements are very straightforward. With a bridge – say the Forth Bridge or the Golden Gate, built a long, long time ago – the functional requirements are trivial: “Support two railway lines/four lanes of traffic, with traffic travelling in both directions (and a footbridge for maintenance).”

The technical requirements are much more complex. 100% of the engineering discipline is focused on technical requirements: masses of steel, cross-sections, moments, stresses and strains. Everything is underpinned by the science of materials (extensively tested in laboratories, with safety factors applied) and tabulated in blue or green books full of cross-sectional areas, beam lengths, cement/water ratios and so on. All these properties are calculated from thousands of laboratory experiments, with statistical techniques applied to come up with factors of safety. Most dams, for example, are not 100% safe for all time. They are typically designed to withstand 1-in-200-year floods. And they fail safely, because one guy in the design office is asked to explore the consequences of failure – which in the main are predictable.

Software does not obey the laws of physics.

Software development is primarily about meeting immensely complex functional requirements and relatively simple technical requirements using some ethereal stuff called software that very definitely does not obey laws at all. (Name one? Please?)

Functional testing is easy; meeting functional requirements is not. Technical testing is also easy; meeting technical requirements is (comparatively) easy.

This post isn't about “non-functional requirements versus functional requirements”. It's an argument that ALL requirements are hard to articulate and meet. So there.

Tags: #ALF


First published 13/10/2010

It seems like Prezi is all the rage. As a frequent presenter, I thought I'd have a play, so I took some of the early text from the Tester's Pocketbook and created my first Prezi. Not half bad. I'm not sure it's a revolution but, sometimes, anything is better than PowerPoint.

Tags: #testaxioms #Prezi


First published 05/11/2009

We're stood in a boat, ankle deep in water. The cannibals are coming to kill and eat us. The testers are looking for the holes in the boat, saying – we can't push off yet, the river is full of hungry crocodiles.

The testers are saying – if we push off now, we'll all die.

The skipper is saying – if we don't push off soon, we'll all die.

It's the same with software.

Tags: #ALF


First published 25/11/2011

Some time ago, Tim Cuthbertson blogged “How I Replaced Cucumber With 65 Lines of Python”. I recently commented on the post and I've expanded on those comments a little here.

I share Tim's frustrations with Cucumber. I think the reuse aspect of the step definitions has some value, but that value is limited. I've heard of several sites having literally thousands of feature files and step definitions and no way to manage them systematically. A bit of a nightmare perhaps.

To address the ‘specification by example isn't enough’ challenge: SBE isn't enough, and I demonstrate/discuss that here. Although some trivial requirements can be fully specified by example, most can't, so you need a separate requirement statement to supplement the scenarios/examples and fully describe the requirement.

This doesn't sound very Agile, but I'm not talking Agile here necessarily. I understand that some teams can live with minimalist stories and the spec is the code. I'm talking about teams that require an accurate definition of the requirement and want to drive the creation of tests from stories and scenarios. This need could apply to all project styles and not just Agile.

Gojko Adzic talks about the need for 'Key Examples' in his Specification by Example book. When I spoke to Gojko not too long ago and suggested more specification content beyond examples was usually required – he agreed. If this is true, that doesn't mean that we need bloated requirements documents. The level of detail in a requirement (as captured by a BA) can be quite compact, because the precision of a business rule doesn't need heavy explanation – the scenarios and tabulated examples (if needed) do that for us.

Successful execution of ‘key examples’ is a necessary but not usually sufficient acceptance criterion. Developers definitely need more tests to cover edge cases, for example. (User) acceptance requires end-to-end tests and probably combinations of examples in sequence to fully satisfy the business users (although these types of tests are likely to be manual rather than automated).

Some time ago, we wrote (a small amount of) code to generate Python unittest code directly from stories, scenarios and example tables, and it works fine. (All we need are different language templates to generate xUnit code in other programming languages.) The test code may be xUnit format, but the story/scenarios define the content of the tests. xUnit code could drive any style of test in theory. We're also experimenting with generating Robot Framework code and HTML Fitnesse tables directly. All seems feasible to me: in principle, all that's required is a template to generate the correctly formatted output. Additional setup/teardown code and fixtures are held in pre-existing code.

Our SaaS product SP.QA can be used to capture requirements and the range of stories/scenarios that example them. Since the requirements and stories are managed in a database and the test code is generated, the developers (or testers) only need to manage a single class or function for each story to implement the tests.

This has the distinct advantage that BAs can treat epic stories/sagas as requirements and build their hierarchical view. Stories that identify features and scenarios that example them can be refined over time. When they are 'trusted' the test code can be generated.

We're offering a commercial product, but I think even in the open source domain, the days of plain text story writing are numbered. We think all requirements tools will move in this direction. In the future, requirements tools will capture requirements and stories/scenarios that example those requirements so that they can be used to drive requirements reviews, to be a starting point for acceptance tests and be used to generate test code for developers and system testers.

When business analysts and users get their hands on these tools, then BDD will really take off.

Tags: #BDD #Python #Cucumber #xUnit #Fitnesse #RobotTestFramework
