Unit testing is often considered too heavy or too inconvenient for embedded software; hence the practice of test-driven development is sometimes eschewed in this domain. But with the right approach, these embedded limitations can be mitigated and our software quality can be greatly improved, explains Frank Vanbever of Mind-Essensium.
The current hype around the Internet of Things has renewed interest in embedded systems from several industries. It’s important to realize that the IoT domain has its own set of unique requirements that sets it apart from many other kinds of software and even many other embedded devices. One of the most important is upgradability: in an IoT scenario devices are regularly upgraded to add functionality and protect against the latest threats. But we often have just one chance to get it right or we run the risk of bricking the device.
This puts an emphasis on software quality. One technique to improve our confidence in the software being deployed is test-driven development (TDD). While there’s still debate on the topic, many studies claim that TDD leads to fewer defects than other development methodologies.
Test-driven development might seem like a strange concept to many software developers when targeting embedded devices. After all, the machines are constrained, and running unit tests on the target can be very inconvenient. The code we write might also have intricate dependencies on the underlying hardware or software components, making it less easily testable.
While such concerns are valid, there are ways to mitigate them and they should not prevent embedded software developers from adding unit testing and test-driven development to their toolboxes. To see how, let’s first take a step back and discuss what test-driven development is all about. In essence, it’s a development process where we write a unit test before actual implementation. A typical TDD cycle, often called the red-green-refactor cycle, looks like this:
- Write a test for every new feature. This forces the developer to clearly understand the requirements.
- Run the test and watch it fail. This is a sanity check on the negative case for both our test and the environment in which it runs.
- Implement the feature by writing just enough code to make the test pass – no more.
- Run all the tests, including all previous tests. This gives us confidence that we haven’t caused any regressions.
- Refactor the code. Code is often written in what could be described as a stream of consciousness style. The tests give us confidence that our refactoring hasn’t caused any regressions.
- Wash, rinse, repeat: iterate until all the required features are implemented.
Close to the metal
As we can see, we have to execute our test often during development. This can be inconvenient and even problematic on constrained embedded systems. What’s worse: we don’t need to run just a single test, but an entire suite that will grow larger and larger over time, making our problem progressively harder.
The solution to this problem is dual targeting: writing our software in such a way that it will run on the target (after cross-compilation) as well as on our development system. This allows us to run the bulk of the unit tests on the host.
Dual targeting presents some – often subtle – pitfalls. Consider for example the difference in word size between an 8-bit microcontroller and the 64-bit processor in today’s workstations. There might also be a mismatch in endianness between the target and the host. The tools used (libraries, compilers, and so on) might also differ in subtle ways, sometimes causing bugs that are difficult to spot.
However, dual targeting also brings additional benefits. We can start work even when the target hardware is not available. Targeting two platforms from the get-go is also a good way to gauge our code’s platform independence. Should the need arise to change to another processor model or architecture in the future, the problem shifts from supporting a second processor to supporting ‘yet another processor’.
Some embedded developers are reluctant to start using test-driven development because they feel the code is too close to the metal to be unit tested. Although embedded software does have strong hardware dependencies, the same argument can be made for non-embedded software. The difference is the absence of abstraction layers in the embedded world, favouring direct interaction with the hardware for increased performance.
By designing clean interfaces that hide complexity and have clearly defined behaviour, we can manage hardware dependencies the same way we manage software dependencies. By keeping the layer that is directly dependent on the hardware small, we minimize the amount of code not covered by unit tests. A common practice is to provide a ‘mock’ or ‘stub’ implementation of the interface for a dependent module. This enables us to test code that interacts with the outside world without depending on the real hardware.
A stub is an implementation of a function or method that has predetermined behaviour. Consider a function that returns an integer between 0 and 100. A valid stub could just return 28. A mock is an implementation of a function or method whose behaviour is configured at run time. In the example above, we would typically provide an implementation that returns a value that was previously communicated to the mock module through a helper function. Mocks can also help verify test success by allowing us to verify whether an interaction with the module has occurred during the test. Tools such as CMock can help us create mock implementations of entire libraries.
Writing easily testable code requires thinking up front about how we will organize our code. TDD’s test-first approach helps to expose such issues early in the development cycle, reducing the need for large refactoring later on.
For sizeable applications, managing the unit tests can become a cumbersome task. In such cases, a unit testing framework can provide assistance. Because our code also runs on the host, we can use the standard frameworks available for automating unit test execution. Such a framework sets up the conditions for a given test, asserts whether the test has succeeded and produces normalized output that can be easily parsed.
Tangentially related to the practice of test-driven development is the concept of continuous integration. The core idea here is that all working developer copies are merged into a shared mainline multiple times per day. When we have a suite of unit tests, we can leverage these to get immediate feedback on the merged changeset.
Unit testing software for embedded systems does present some additional challenges compared to unit testing software for non-embedded targets. Through dual targeting and dependency management, test-driven development can become a successful strategy for developing software that targets embedded systems – a strategy that produces low-defect software suited to the Internet of Things era.