Jan Bosch is research center director, professor, consultant and angel investor in start-ups. You can contact him at jan@janbosch.com.

5 September

As I discussed earlier, one of the key approaches to software development is the HoliDev model. It combines requirements-driven development, outcome-driven development (e.g. A/B testing) and AI-driven development. It may easily seem that for each feature or functionality, one type of development is selected and then used throughout its lifecycle. In practice, however, we see the development approach evolve over the feature's lifetime.

First, product management determines that a certain function or feature is relevant to the customer. The function or feature is initially specified in a requirement specification and then built by R&D. Even though it may not be clear which metrics the function or feature is intended to improve, it's reasonable to assume that there are implicit assumptions about its effect on the performance of the system.

After deployment, the system's instrumentation will show certain changes in the collected metrics and KPIs that can, reasonably, be attributed to the newly added function or feature. Assuming the function or feature proves to be valuable, the next step is to consider how we can improve the metrics and KPIs it drives. At this point, the team can decide to switch from requirements-driven to outcome-driven development and start to develop hypotheses and test these through experimental development approaches, such as A/B testing or multi-armed bandit (MAB) algorithms. Over the course of several experiments, the metrics and KPIs can be optimized by testing the alternatives in operation.
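To make this concrete, here's a minimal sketch of one such experimental approach, an epsilon-greedy multi-armed bandit. The class name, the arms and the notion of reward are all illustrative assumptions: in practice, each "arm" would be a variant of the feature and the reward would come from the system's instrumentation (e.g. a conversion: 1 or 0).

```python
import random

class EpsilonGreedyBandit:
    """Illustrative epsilon-greedy bandit: explore a random variant with
    probability epsilon, otherwise exploit the best-performing one so far."""

    def __init__(self, arms, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {arm: 0 for arm in arms}    # times each arm was served
        self.values = {arm: 0.0 for arm in arms}  # running mean reward per arm

    def select_arm(self):
        # Explore with probability epsilon; otherwise pick the current best arm.
        if random.random() < self.epsilon:
            return random.choice(list(self.counts))
        return max(self.values, key=self.values.get)

    def update(self, arm, reward):
        # Incrementally update the mean reward of the arm that was served.
        self.counts[arm] += 1
        n = self.counts[arm]
        self.values[arm] += (reward - self.values[arm]) / n
```

Unlike a classic A/B test, which splits traffic evenly until the experiment ends, a bandit shifts traffic toward the better variant while the experiment is still running, which is why it often appears as the second step once basic A/B testing is in place.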

Once the return on investment of A/B testing starts to decline, the team can decide to move to the next stage of using AI techniques, such as automated experimentation employing machine- and deep-learning (ML/DL) approaches, to drive further improvement. The good news is that, at this point, lots of data have been collected and A/B testing has generated alternatives with different outcomes that can be used as a basis for training the ML/DL system. The team can then put an ML/DL model in place that uses its training and new data as a basis for fully automated improvement of the function or feature over time.

This describes the process of moving from requirements-driven to outcome-driven to AI-driven development. In practice, however, you don't have to follow these stages in order. Instead, you can start with outcome-driven or AI-driven development directly and evolve from there.

Similarly, we can move back from AI-driven and outcome-driven development to requirements-driven development when functionality is commoditizing. Once it's commoditized, there's no value in improving the functionality further. So, at this stage, the functionality is frozen and no longer improved upon. Sometimes, however, there's a need to make changes to the commoditized component anyway. For example, a defect is found or a dependency on external software ceases to work correctly. In this case, a requirement to update the commoditized component, with the intent of keeping the functionality up to date, will enter the system.

The key point is that the approach to select for developing or evolving functionality depends on several factors. There's no fixed recipe for this. Instead, you should carefully evaluate the benefits and disadvantages of each approach, based on the level of differentiation provided by the functionality, the understanding of its intended effect in quantitative terms, the amount of data available, the willingness of customers to be exposed to evolving, periodically changing functionality and other factors. Doing so will allow you to significantly increase the effectiveness of your R&D investments, as you have quantitative data to steer decision-making.