Derk-Jan de Grood works for Valori as an Agile transformation coach and test manager. He wrote several successful books on software development and testing. He likes to think about trends in IT. On his blog, he shares his knowledge and experience for everyone’s benefit. He drinks his coffee black. Rik Marselis works for Sogeti as a lead management consultant in the field of digital assurance and testing. He’s the author of several books. The latest one, entitled “Testing in the digital age: AI makes the difference”, deals with both testing of AI and testing with AI. Rik prefers green tea, no milk, no sugar.

23 May 2019

Artificial intelligence is a popular topic these days. As companies include it in their solutions and services, IT professionals are more likely to encounter AI and machine learning in their daily work and to experience the challenges they bring. Derk-Jan de Grood and Rik Marselis wanted to take a deep dive into the topic and organized a whiteboard session to discuss AI and testing.

Standing at the coffee machine, we start our conversation. “The idea of artificial intelligence is by no means new,” Rik explains. “Alan Turing already talked about it in the 1950s. But recently, computers have gotten so powerful that it can actually be applied.” Derk-Jan agrees: “That explains why we see it all around us. Sugar, milk, Rik?” We sit down and decide to focus on how AI impacts the way we do our testing.

Testing consists of a myriad of different activities. The first that comes to mind is test execution: test cases are executed to challenge an information system and the output is validated to determine whether the system works as expected. AI can automatically scan the system and run tests – Rik calls this “testing with AI”. For example, it can use image recognition to identify buttons and links in a user interface and then click them to see what happens. These types of tools (like many AI-based tools) are still in their infancy, but we’re already seeing a few companies and universities successfully applying them.
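To make this concrete, here’s a minimal sketch of image-based element detection – the file names and the confidence threshold are ours, and real AI-based tools use trained recognition models rather than simple template matching:

```python
# Sketch: locate a button in a screenshot via template matching and click it.
# The image files are hypothetical; real tools use trained recognition models.
import cv2
import pyautogui

screenshot = cv2.imread("screenshot.png")   # full-screen capture
button = cv2.imread("submit_button.png")    # image of the element to find

# Slide the template over the screenshot and score the match at each position
result = cv2.matchTemplate(screenshot, button, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

if max_val > 0.8:  # only act on a confident match
    h, w = button.shape[:2]
    x, y = max_loc[0] + w // 2, max_loc[1] + h // 2
    pyautogui.click(x, y)  # exercise the element to see what happens
    print(f"Clicked element at ({x}, {y}), score {max_val:.2f}")
else:
    print("Element not found on screen")
```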

When we’re testing systems that have machine learning in them, we have another application of AI – Rik calls this “testing of AI”. Smart systems change their behavior as they learn, so testers who check them against predefined behavior will find that subsequent tests yield different answers. Already, organizations are applying AI to interpret their systems’ responses. Once the interpreter is fed with possible outcomes, it can be used to decide whether a test has passed.
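As a simple illustration of such an interpreter – the function, answers and threshold below are ours, not from any specific tool – the verdict can be based on a set of acceptable outcomes rather than one exact expected value:

```python
# Sketch of a test oracle for a learning system: the verdict is based on a
# set of acceptable outcomes instead of a single exact expected value.
# Names, answers and thresholds are illustrative only.

def interpret(actual: str, confidence: float,
              acceptable: set, min_confidence: float = 0.7) -> bool:
    """Pass if the system's answer is a known-good outcome,
    given with enough confidence."""
    return actual in acceptable and confidence >= min_confidence

# A learning chatbot may phrase the same correct answer in several ways
acceptable_answers = {"order shipped", "order has been shipped"}
assert interpret("order shipped", 0.92, acceptable_answers)        # pass
assert not interpret("order cancelled", 0.95, acceptable_answers)  # fail
```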

From reactive to proactive

AI really gets a chance to shine when a lot of data is involved. It can be used to find patterns in production data. By analyzing how frequently functions are used, behavioral patterns can be uncovered. Thus, companies can learn how their systems are actually being used. From the uncovered patterns, AI is able to generate the ideal test set, with a test case for each occurring pattern. The advantage of this approach is that the resulting test set can be made representative yet no longer privacy sensitive. In addition, such a generated set, with synthetic data, is much more compact than a copy of production data (as was customary before everybody had to comply with the GDPR).
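As an illustration of this idea – the session features and numbers below are invented – production sessions can be clustered into usage patterns, with one synthetic test case per pattern:

```python
# Sketch: derive a compact, synthetic test set from production usage.
# The session features are invented; a real pipeline would extract them
# from production logs.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Each row is one session: [functions used, items in cart, session minutes]
sessions = rng.normal(loc=[5, 2, 10], scale=[2, 1, 4], size=(1000, 3))

# Group sessions into behavioral patterns
patterns = KMeans(n_clusters=4, n_init=10, random_state=0).fit(sessions)

# One synthetic test case per pattern: a cluster centroid contains no real
# user's data, yet the set stays representative of actual usage.
for i, center in enumerate(patterns.cluster_centers_):
    print(f"Test case {i}: ~{center[0]:.0f} functions used, "
          f"~{center[1]:.0f} cart items, ~{center[2]:.0f} min session")
```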

“Tools that generate test cases have already been around for years,” Derk-Jan states. “Just think about model-based testing tools. We can use the model to generate tests we want to execute. I can imagine that AI can be used to decide what tests to execute and to determine the coverage for various parts of the system.”
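To illustrate the model-based part – the web-shop states and actions below are invented – even a tiny state-transition model can generate test paths automatically:

```python
# Sketch: generate test paths from a small state-transition model.
# The web-shop states and actions are invented for illustration.
model = {
    "start":     [("login", "logged_in")],
    "logged_in": [("add_item", "cart"), ("logout", "start")],
    "cart":      [("checkout", "paid"), ("logout", "start")],
    "paid":      [],  # end state
}

def generate_paths(state, path, depth, tests):
    """Enumerate action sequences up to a given depth as test cases."""
    if depth == 0 or not model[state]:
        tests.append(path)
        return
    for action, next_state in model[state]:
        generate_paths(next_state, path + [action], depth - 1, tests)

tests = []
generate_paths("start", [], 4, tests)
for test in tests:
    print(" -> ".join(test))
```

An AI, as Derk-Jan suggests, could then decide which of these generated paths to actually execute and what coverage they provide.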

Rik starts smiling. “Yes, with AI, we can take this much further. Testers often carry out a large number of tests that are not all equally relevant at a later stage. For a regression test, a selection has to be made. Manually, this is almost impossible, while an AI algorithm – with the right training and parameters – can easily reduce even the largest test set to a meaningful size. To achieve this, the AI can use production data and information about, eg, the last time a specific test was executed. Other factors that can influence the test selection are the history of the module under test, the results of the last test run and the number of bugs found with similar cases. When state transition testing leads to many issues in another module, for example, the AI might assign more weight to these types of tests in the selection.”
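A deliberately simplified version of such a selection – the tests, signals and weights below are ours, where the AI Rik describes would learn them from production data and test history:

```python
# Sketch: rank regression tests by risk-related signals and keep the top N.
# The weights are hand-picked here; a trained model would learn them.
tests = [
    # name, days since last run, failures in last 10 runs, bugs in similar tests
    ("login_flow",       30, 0, 1),
    ("checkout_payment",  2, 3, 4),
    ("profile_update",   90, 1, 0),
    ("search_results",   10, 2, 2),
]

W_AGE, W_FAIL, W_SIMILAR = 0.02, 1.0, 0.5  # illustrative weights

def score(test):
    _, age, failures, similar_bugs = test
    return W_AGE * age + W_FAIL * failures + W_SIMILAR * similar_bugs

# Reduce the full suite to a meaningful subset
selection = sorted(tests, key=score, reverse=True)[:2]
for name, *_ in selection:
    print("selected:", name)
```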

“From what we’ve discussed, I understand that we will soon be feeding the AI with historical test results,” Derk-Jan concludes. “Testing provides a lot of information about the system’s quality: where are its weak spots, which processes are fragile and in which areas does it perform well? All knowledge gained during testing. This will lead to powerful insights, will it not, Rik?”

“What you’re referring to is predictive quality assurance,” Rik explains. “With AI, we see a shift from reactive testing, where tests are done after the product has been made, towards proactive testing. With proactive testing, we can select and execute tests while the product is being made, and we can analyze the results on the fly. The next step is prediction. Based on historical data collected from testing in the test environment and monitoring in the production environment, the AI will predict issues to be expected after product release.”

By combining information from both testing (before going live) and monitoring (in production), the AI can make predictions about quality. When it predicts a decrease because there’s a risk of certain defects, the DevOps engineers can resolve those errors before a single user notices them. That’s the ultimate predictive maintenance in IT.
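A minimal sketch of what such a prediction could look like – all data below is synthetic, and a real system would draw its features from actual test and monitoring history:

```python
# Sketch: predict post-release defects per module from combined testing
# and monitoring signals. All data is synthetic, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Features per module: [test failure rate, code churn, production error rate]
X = rng.random((200, 3))
# Historical label: did the module show a defect after release?
y = (X @ np.array([2.0, 1.0, 3.0]) + rng.normal(0, 0.5, 200) > 3.0).astype(int)

model = LogisticRegression().fit(X, y)

# Score a module that tests clean but churns and errors in production
new_module = np.array([[0.1, 0.8, 0.7]])
risk = model.predict_proba(new_module)[0, 1]
print(f"Predicted defect probability: {risk:.0%}")
```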

“Unfortunately,” Rik states with a frown, “we’re not there yet. However, the first step has been taken with ‘smart dashboarding’, in which the information from testing and monitoring is automatically interpreted and presented. This way, all stakeholders have a lot of real-time information they can use to make decisions.”

Bright future

Derk-Jan decides to grab another coffee and Rik takes another tea. While we walk back to the meeting room, we wonder where AI will bring us. An interesting application is root cause analysis. We haven’t yet seen any implementation of this, but in the future, we’ll use AI to analyze errors that have been found: based on patterns, it will be possible to identify their type, suggest their origin and give recommendations on how to solve them or avoid them. Once we’ve taken that step, we’re on the road to self-healing software. But that’s another cup of tea, we conclude.
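Purely as a thought experiment – as said, we haven’t seen an implementation yet, and the error messages and categories below are invented – pattern-based matching of a new error against known root causes could look like this:

```python
# Thought experiment: suggest a root cause for a new error by matching it
# against known, categorized errors. All messages are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

known_errors = {
    "connection timed out to database host": "infrastructure",
    "null pointer dereference in order module": "coding defect",
    "unexpected value in currency field": "data quality",
}

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(list(known_errors))

new_error = "timeout while connecting to database replica"
similarity = cosine_similarity(vectorizer.transform([new_error]), matrix)[0]
best_match = similarity.argmax()
print("Suggested root cause:", list(known_errors.values())[best_match])
```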

“So far, we’ve discussed AI applications that were merely algorithms. We all like physical devices such as robots. What do you think, Rik,” Derk-Jan wonders, “will we also start using robots for test work?”

“Yes, we will,” Rik projects. “Information technology is already present in all the devices and machines around us. This makes testing the IT component very important. The collaboration between the IT component and the machine needs to be tested as well, though. This often requires physical actions, such as pressing buttons, operating control levers or moving parts – with an exoskeleton, for example. You can, of course, have people do this kind of physical testing, but if you want to make the tests repeatable and have the option to continue testing 24/7, robots are the way to go. We’ve already seen the first applications of robot arms as test tools, performing physical touch actions on the screens of mobile devices. In the aircraft industry, industrial robot arms are used for regression testing of cockpits after a new software version has been installed.”

We finish our drinks and conclude that AI has a big impact on testing. There’s a bright future for testers who are willing to learn new stuff. Boring work (often referred to as “checking”) is taken over by artificial intelligence, and human testers have their hands free to use their experience, intuition and creativity to explore whether an information system is really suitable for the intended use.

Before we get there, however, we’ll need to select and prepare a lot of data to feed to the learning machine. This will require a new breed of testers who understand these new ways of testing and who can both implement AI solutions and tell a convincing story in order to get organizations to invest in them.

Edited by Nieke Roos