Outdated belief #7: Post-deployment is relevant only for (serious) quality issues
A few decades ago, the first reports were published of software errors resulting in financial losses exceeding one billion euros. Since then, many more accounts of software errors costing hundreds of millions or more have been in the news. The response in the larger community was twofold. First, test the heck out of every piece of software going out the door and take whatever time that requires. Second, once a product containing software has left the factory, do everything you can to avoid having to change the software.
The reason to avoid changing software in a shipped product was, again, twofold. First, the cost of validating the software was often very high, not least because significant human effort was required to conduct all the tests necessary to reach production quality. As most development followed a waterfall-style process, the validation phase typically found numerous errors. The subsequent problem was that, statistically, 25 percent of all code changes intended to remove errors introduced new errors instead, meaning the list of issues to fix simply kept growing if you weren't careful. So, once you had a shippable version of the software with good quality, you wanted to avoid messing up the code and introducing new defects.
The second reason to avoid updating software post-deployment was that it required either a recall, where the product, e.g. a vehicle, had to be brought to a service station, or a visit by a service technician to the site where the system was installed, e.g. a medical scanner. This typically resulted in hundreds of euros of expenses per product instance for every software update. As the business model usually didn't include a way to get paid for these updates, this became a non-recoverable expense that companies wanted to avoid as much as possible.