Manual and automated testing should be considered complementary, argues Andrew Issaenko.
The paradigm “testing with the button press” is the hidden motto that defined, defines and will probably continue to define the philosophy of software test automation for many years to come. When the button is pressed, tests are run, and issues are uncovered in the new build of the software. What could be simpler?
However, this paradigm is probably the main reason why there are very few examples of truly cost-effective test automation implementations. Automated test cases run smoothly until the UI, or even the logic of the underlying business processes, changes, necessitating adjustments to test scripts and exposing maintainability issues. Those adjustments can be made relatively painlessly if there are abstraction layers encapsulating the IDs of UI elements, steps, actions or even entire business processes.
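One common way to build such an abstraction layer is the Page Object pattern: locators and actions for a screen live in one class, so a UI change means editing one place rather than every script. The sketch below is illustrative, not any particular framework's API; `FakeDriver` stands in for a real Selenium-style driver so the example is self-contained, and the class and locator names are invented.

```python
class FakeDriver:
    """Stand-in for a real WebDriver so the sketch is self-contained."""
    def __init__(self):
        self.typed = {}
        self.clicked = []

    def type_into(self, locator, text):
        self.typed[locator] = text

    def click(self, locator):
        self.clicked.append(locator)


class LoginPage:
    # Locators are encapsulated here; test scripts never mention raw IDs,
    # so a renamed element means a one-line change in this class only.
    USERNAME = "id=username"
    PASSWORD = "id=password"
    SUBMIT = "id=login-btn"

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        """One business-level action that hides three UI-level steps."""
        self.driver.type_into(self.USERNAME, user)
        self.driver.type_into(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)


driver = FakeDriver()
LoginPage(driver).log_in("alice", "secret")
```

Tests call `log_in` and never touch the locators; the same idea extends upwards to layers for whole workflows built from page-level actions.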
Defining a good architecture for optimal maintainability of test automation can be tricky, as nobody knows what will change in the software under test, or how. Abstraction and encapsulation are the best friends of test automation, but they can also become its worst enemies if they're unbalanced and the structure of layers isn't well thought out. In extreme cases, a small change in the very base layer required by one test can trigger cascading failures across other test automation scripts.
The solution here is to stop looking at automated and manual testing as two separate worlds and instead explore how they can benefit from each other. When executing manual tests, a tester usually needs to perform many routine operations, such as creating and purging dedicated test data records in the database, setting up configuration, filling in required data in forms and so on. Automating these time-consuming and repetitive actions with dedicated test utils can greatly benefit manual test execution.
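As a minimal sketch of such a util, the snippet below seeds and purges tagged test records in a database before and after a manual session. It uses an in-memory SQLite database to stay self-contained; the table, column names and the `'test-data'` tag are invented for illustration, not taken from any real schema.

```python
import sqlite3


def seed_customers(conn, names):
    """Insert tagged test records; the tag makes purging safe and easy."""
    conn.execute("CREATE TABLE IF NOT EXISTS customers (name TEXT, tag TEXT)")
    conn.executemany(
        "INSERT INTO customers (name, tag) VALUES (?, 'test-data')",
        [(n,) for n in names],
    )


def purge_test_data(conn):
    """Remove only the records the util created, leaving real data untouched."""
    conn.execute("DELETE FROM customers WHERE tag = 'test-data'")


# Typical manual-testing session: seed before, purge after.
conn = sqlite3.connect(":memory:")
seed_customers(conn, ["Ada", "Grace"])
# ... the tester runs manual checks against the seeded data here ...
purge_test_data(conn)
```

Tagging every generated record is the key design choice: purging becomes a single safe delete, so the tester can seed and reset freely between sessions.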
Using those test utils on a regular basis means they evolve and are adjusted organically in response to software changes. Looking for ways to get the most from them will also help uncover potential improvements to the overall test design and the way tests are executed. In classical test automation, optimizing supporting functions is constrained by the potential impact on already existing tests. Utils used for manual testing carry no such impact, so they can be adjusted and iterated on more quickly to better support test execution.
For example, instead of preparing data records for each test independently, it can be sensible to combine the preparations and run the util only once, specifying the parameters of the required data records in a test data file. And because there is no manual effort in creating records, the variety of underlying test data can be expanded and enriched, so more scenarios are covered by manual tests. Well, “manual” isn't quite the correct term anymore: it becomes hybrid, or automation-assisted, manual testing.
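The single-run, data-file-driven approach might look like the sketch below: the tester lists the variations once, and one util run materialises all of them. The JSON structure, field names and the record builder are assumptions made up for illustration; a real util would feed the expanded records into the seeding step shown earlier.

```python
import json

# In practice this would be read from a test data file the tester maintains;
# the fields "plan", "region" and "count" are invented for this example.
TEST_DATA = """
[
  {"plan": "basic",   "region": "EU", "count": 2},
  {"plan": "premium", "region": "US", "count": 1}
]
"""


def build_records(spec):
    """Expand each parameter row into the requested number of records."""
    records = []
    for row in json.loads(spec):
        for i in range(row["count"]):
            records.append({
                "plan": row["plan"],
                "region": row["region"],
                "name": f"{row['plan']}-{row['region']}-{i}",
            })
    return records


records = build_records(TEST_DATA)
print(len(records))  # 3 records expanded from 2 parameter rows
```

Adding a new scenario now means adding one line to the data file rather than hand-crafting records, which is what makes enriching the test data essentially free.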
In the classical concept of test automation, there is a huge distance between exploratory testing and test automation, as the latter requires predefined input and a known expected output. However, nothing makes manual exploratory testing more productive, and even more exploratory, than automating its routine and time-consuming steps. The tester gains more time to explore what matters, and the opportunity to become an exploratory test data and test pattern analyst, as the whole of exploratory testing moves into a new realm.
That happened to me when I executed exploratory tests on TV recommendation software. Expanding the variations and coverage of exploratory tests unveiled critical testing issues at an early stage and enabled the crystallization of functional requirements from ideas and Scrum stories. And yes, I eventually ended up creating ‘fixed’ test automation by simply implementing the trickiest and most interesting test scenarios and situations in simple test scripts. The architecture of the test automation framework and its layers had emerged earlier, during the design of the dedicated test automation utils. As a result, the formal test automation was implemented quickly once the software was stable, because its development started early and benefited greatly from manual testing along the way.