
Jan Bosch is a research center director, professor, consultant and angel investor in startups. You can contact him at jan@janbosch.com.

28 June

There are few topics in software development that get people as worked up as testing and quality assurance. The notion of shipping low-quality code to customers feels like a humiliation to most engineers. It creates the perception of abusing the customer for testing purposes, which doesn't sit right and, to many, feels like a fast way to get disrupted.

Although everyone agrees that software needs to be tested and be of sufficient quality before shipping to customers, the discussion often concerns how to achieve that desired state. Interestingly, there's still a significant group of people who believe that manual testing at the end of the development process is superior to anything else, including automated and continuous integration and testing.


My observation is that this opinion is based on at least three misconceptions. First, many seem to think in waterfall terms, where requirements, design, development, testing and release are performed in sequence. In that view, it doesn't make sense to test before development is done, nor does it matter that code quality drops during development, as testing will happen afterward anyway.

Second, there's the assumption that a manual test where a human observes the outcome is superior to automated testing. As we only have to test once, it supposedly doesn't matter that it takes time and human effort. For some of the people I talked to, a dashboard showing the successful completion of various automated test suites said nothing about the quality of the system.


Third, many consider testing and quality assurance to end when the software is deployed. The basic assumption is that once we’ve performed our testing, the software can be trusted and there’s no need to track its performance out in the field. And if, against all odds, some issue is reported by a customer, this is viewed as a failure of the quality assurance activities.

For those of you following my posts, it comes as no surprise that I strongly believe in the power of automation of everything we can around software development and deployment. One of the most powerful drivers to achieve that is to “do it often,” which of course is exactly what DevOps and other continuous practices are all about.

The continuous approaches allow for building up experiences and expertise that enable us to standardize and codify how we do things. Once we do that, we can use that knowledge to automate what we used to do manually. This obviously applies to testing, but also to many other activities around software development as well as, with robotic process automation (RPA) and AI, many other processes and activities within the organization.
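As a minimal, hypothetical illustration of codifying a manual activity: imagine a release check that used to mean opening a status page and eyeballing it ("is the service healthy, is the version current, are there errors?"). Once that knowledge is written down as a function, CI can run it on every commit. The status fields and rules below are invented for the example, not taken from any specific system.

```python
# Hypothetical example: a formerly manual release check codified as a
# function that an automated pipeline can run on every commit.

def check_release_status(status: dict) -> list[str]:
    """Return a list of problems; an empty list means the check passes."""
    problems = []
    if status.get("state") != "healthy":
        problems.append(f"unexpected state: {status.get('state')!r}")
    if not status.get("version", "").startswith("2."):
        problems.append(f"incompatible version: {status.get('version')!r}")
    if status.get("error_count", 0) > 0:
        problems.append(f"{status['error_count']} errors reported")
    return problems

if __name__ == "__main__":
    ok = {"state": "healthy", "version": "2.4.1", "error_count": 0}
    bad = {"state": "degraded", "version": "1.9.0", "error_count": 3}
    print(check_release_status(ok))        # []
    print(len(check_release_status(bad)))  # 3
```

The point isn't the specific checks but the shift: once the manual judgment is codified, "do it often" becomes free.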

Some years ago, we developed the continuous integration visualization technique (CIVIT), which visualizes all testing activities in an organization. One of the key takeaways is that for companies using DevOps-style approaches, testing continues post-deployment: by running test cases after the software is deployed at the customer and by observing the functioning of the software through instrumentation and data collection. This might be viewed as using the customer for testing, but in practice, the number of deployment contexts far exceeds the ability of most companies to build realistic testbeds. It's simply prohibitively expensive to achieve full coverage, and more effective to test post-deployment, identify remaining issues and, preferably, fix these before the customer even realizes there's a problem.
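To make the instrumentation idea concrete, here's a hedged sketch (the class name, window size and threshold are all illustrative, not from CIVIT or any real product): the deployed software records the outcome of each operation, and a sliding-window error rate is checked against a threshold so the vendor can react before the customer notices a problem.

```python
from collections import deque

# Illustrative sketch of post-deployment observation: record operation
# outcomes in a sliding window and flag the deployment when the error
# rate exceeds a configured threshold.

class HealthMonitor:
    def __init__(self, window: int = 100, max_error_rate: float = 0.05):
        self.window = deque(maxlen=window)  # keeps only the most recent outcomes
        self.max_error_rate = max_error_rate

    def record(self, success: bool) -> None:
        self.window.append(success)

    def error_rate(self) -> float:
        if not self.window:
            return 0.0
        return self.window.count(False) / len(self.window)

    def needs_attention(self) -> bool:
        return self.error_rate() > self.max_error_rate

# Usage: 7 successes and 3 failures in a 10-operation window.
monitor = HealthMonitor(window=10, max_error_rate=0.2)
for outcome in [True] * 7 + [False] * 3:
    monitor.record(outcome)
print(monitor.error_rate())       # 0.3
print(monitor.needs_attention())  # True
```

In a real system the signal would feed an alerting pipeline rather than a print statement, but the principle is the same: the field data itself becomes part of quality assurance.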

Software development and deployment is a continuous activity in which as many tasks as possible should be automated. The activities don't happen sequentially but in parallel, and we need automation to keep things aligned. However, our research does show that it's beneficial, especially for complex systems, to complement automated testing with some manual, exploratory testing. This testing focuses on complex, end-to-end use cases and is less concerned with defects in the traditional sense than with inconsistencies and misalignments in the user experience.

As most companies I work with have already realized, modern software development is about "doing it often" and automating what we can. That includes testing, as it's obvious that testing early and often is better than testing late and only once. Reams of paper have been filled with best practices concerning testing and quality assurance, and I've ignored most of them in this post. However, whether or not a system is expected to be highly secure and safe, encoding requirements in automated test cases is still better than conducting tests manually. Certification institutes have to catch up with this reality as well. Systems that continuously get safer and more secure are preferable to static and stale ones.