
Do your testers have eidetic memory?

In this series of blog posts, we’re taking a look at a number of facets of Undo’s Live Recorder. Later we’ll look at ways of integrating Live Recorder into a workflow, and then go deeper to explore how to get the best possible experience out of it; today, though, we’re looking at another scenario in which Live Recorder can prove useful. Last time we focused on automated testing; this time we’re going to the other end of the testing spectrum: manual testing.


I’d like to take a little time to clarify the terms of reference on this one, because “manual testing” covers a multitude of sins, but also a few virtues. So bear with me while we chip away at the crust and uncover the gems beneath.

[Image: Type_Monkey]

Not so long ago, in a galaxy not so far away, all software testing was done manually. Back then a “test script” was something testers read from, to follow the steps required to validate the product. Some younger readers might now be saying to themselves, “OMG, that’s cray cray!!!” but we must remember that we’re talking about a time when applications were relatively simple things, with limited functionality. The computers they ran on were relatively basic, likely not even capable of running a second process to test the first. I’m talking about the days even before Bill Gates allegedly uttered the infamous words, “640K ought to be enough for anybody.”

People were also more tolerant of software bugs in times of yore: only technically savvy people were using computers, and that demographic is generally more willing to find a workaround than to give up on an application altogether, if they believe the application can make their lives better. As a result, software just wasn’t as well tested. The number of test cases wasn’t so high that manually iterating through them all was prohibitive, and the software QA process, such as it was, leaned far more heavily on ad hoc testing. I further suspect that testers were viewed as a relatively cheap resource: they were hired to do a mind-numbingly repetitive job and paid accordingly.


“Automated testing was born”

As software became more complex, computers more powerful, and OSes more sophisticated, people realised that the computer could be made to do much of the tedious testing grunt work. Organisations started cobbling together infrastructure to run scripts containing a sequence of shell commands and textual descriptions of tests. Automated testing was born.

Initially, automated testing was the preserve of the non-graphical. Backend servers benefited greatly, and the internals of GUI-fronted applications could be, if not thoroughly exercised, then at least taken for a walk, provided there was also a command-line interface. The rise of modular programming also allowed for unit, or component, testing; thus you could be somewhat assured that the pieces behind the GUI behaved as stated on their tins. The world of testing grew to require a subspecies of tester who could also write code to a degree... but that was okay, because there was plenty of interface testing left to go around for those who couldn’t, and nobody needed to go on strike.

Eventually, the interface testers saw the green, green grass in their automated-tester neighbours’ gardens, and GUI-testing frameworks came onto the scene.

Which brings us pretty much up to date. From this point onwards, I’m no longer talking about the kind of repetitive manual testing that can and should be replaced by automated testing as described above. I’m talking instead about what remains: the ad hoc testing of the final product, for which the human factor cannot (yet!) be replaced.

[Image: Monkey_User_Reproducer]

Graphic used with consent from the cool dudes at MonkeyUser

One of the first questions asked after a manual tester encounters a bug is, “What did you do?” For a relatively straightforward bug with a simple chain between cause and effect, maybe the tester can weave a yarn with sufficient verisimilitude that a developer can reproduce it. But bearing in mind that most of the “simple” bugs have hopefully been caught by automated testing, there is likely a more tangled web of interacting effects. For the tester, to whom the inner workings of the application are hidden mysteries, spotting the connections is impossible. Testers can find themselves looking over their own shoulders as they work, in a bid to capture their every action just in case it proves relevant to some issue they discover in the future.

I discussed in my previous post how Live Recorder can help with diagnosing sporadic automated test failures. In the manual testing scenario, things are even worse. It isn’t simply that the failure is subject to subtle variations in timing and/or environment; often we’re not even sure of the steps to reproduce it under any circumstances. Without Live Recorder, the best you can do may be to add some speculative diagnostics and wait to see whether the issue recurs. With Live Recorder, if you record the application under test throughout manual testing, the tester need only save the recording at the point the failure is detected. A developer can then diagnose the fault from the recording without needing to work out what the tester did, or which parts of it were actually relevant.
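
To make that workflow a little more concrete, here is a minimal, purely illustrative Python sketch of a launcher a test team might wrap around the application under test, so that every manual session produces a recording the tester can attach to a bug report. The recorder command name (`live-record`), its argument order, the `.undo` extension and the directory layout are placeholders assumed for illustration, not Live Recorder’s documented interface; consult the Undo documentation for the real invocation.

```python
#!/usr/bin/env python3
"""Hypothetical launcher: run the application under test beneath a recorder
so a manual tester always has a recording to hand when a bug appears.

NOTE: the recorder command name, its argument order and the file extension
below are illustrative placeholders, not the real Live Recorder CLI.
"""
import datetime
import pathlib
import subprocess
import sys

RECORDINGS_DIR = pathlib.Path("recordings")  # where saved sessions end up
RECORDER_CMD = "live-record"                 # placeholder command name


def run_session(app_cmd: list[str]) -> pathlib.Path:
    """Run one manual-testing session under the recorder and return the
    path the recording is (hypothetically) written to."""
    RECORDINGS_DIR.mkdir(exist_ok=True)
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    recording = RECORDINGS_DIR / f"session-{stamp}.undo"

    # Placeholder invocation: "<recorder> <output-file> -- <app and args>"
    subprocess.run([RECORDER_CMD, str(recording), "--", *app_cmd], check=True)
    return recording


if __name__ == "__main__":
    if len(sys.argv) < 2:
        sys.exit(f"usage: {sys.argv[0]} <application> [args...]")
    path = run_session(sys.argv[1:])
    print(f"Session finished; if you hit a bug, attach {path} to the ticket.")
```

The interesting part is the process rather than the code: as long as recordings are cheap to make and easy to attach to a bug report, the tester’s memory stops being the bottleneck.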

In addition to giving developers a fighting chance to diagnose issues thrown up by manual testing, Live Recorder frees up manual testers’ mental capacity for doing what they do best.