Software bugs drive costs far more than hardware errors: software failures account for roughly three times the downtime costs of hardware failures. Yet many organizations invest little effort or money in software quality. Even companies that test their code extensively find that the process is so complex that some bugs are inevitable. Companies that do not spend the time and money upfront to catch bugs end up paying for them in downtime and corrective work after the application is released; in the worst case, they lose customers or revenue. In this article we discuss the Rule of Ten and how it applies to application quality.
The Rule of Ten states that the further a bug travels undiscovered through the stages of the development process, or even to the end customer, the more it costs to eliminate.
The rule is grounded in several studies from the 1970s in Japan, the USA, and Great Britain that examined the causes of product and quality defects. All of these analyses delivered nearly the same result: 70% of all product defects were caused by failures during the planning, design, or preparation stages. Although the studies focused on manufacturing processes, the same pattern appears in modern software development. If fixing a defect costs 100€ at unit testing, it costs 1,000€ at system testing, 10,000€ at acceptance testing, and 100,000€ after release.
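The tenfold escalation above can be sketched in a few lines of Python. The stage names and the 100€ base cost come from the figures in the text; the model itself is an illustration, not a measurement:

```python
# Rule of Ten cost model: each stage a bug survives multiplies
# the cost of fixing it by ten.
STAGES = ["unit testing", "system testing", "acceptance testing", "after release"]
BASE_COST = 100  # cost in € to fix a defect caught at unit testing

for i, stage in enumerate(STAGES):
    cost = BASE_COST * 10 ** i
    print(f"{stage}: {cost:,} €")
# unit testing: 100 €
# system testing: 1,000 €
# acceptance testing: 10,000 €
# after release: 100,000 €
```

The same tenfold factor applies regardless of the base cost you plug in, which is why the rule is usually stated as a ratio rather than in absolute currency.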
The second part of the Rule of Ten states that each stage lets only 10% of the remaining bugs through. In other words, you find and correct 90% of bugs during development, so 10% remain going into system testing. System testing finds and corrects 90% of those, leaving 10% of them for acceptance testing; after system testing you should therefore have found and corrected 99% of the project's total bugs. Acceptance testing again finds and corrects 90% of the remaining 1%. By the time you go into production, your code should be 99.9% bug-free.
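The step-by-step filtering described above can be traced directly. This sketch assumes the same 90% detection rate at each of the three stages named in the text:

```python
# Each stage removes 90% of the bugs that reach it; track what remains.
detection_rate = 0.9
remaining = 1.0  # fraction of the original bugs still in the code

for stage in ["development", "system testing", "acceptance testing"]:
    remaining *= 1 - detection_rate
    print(f"after {stage}: {remaining:.1%} of bugs remain")
# after development: 10.0% of bugs remain
# after system testing: 1.0% of bugs remain
# after acceptance testing: 0.1% of bugs remain
```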
If your software testing process achieves a 90% detection rate at each stage, only 0.1% of all bugs make it into production. If your quality assurance program falls below that rate, however, your development costs increase drastically. Dropping the detection rate to just 80% multiplies your costs, and the additional bugs that slip into production bring a corresponding increase in follow-up costs and reputation damage: instead of a 99.9% clean application, you release one that is only 99.2% clean, with 0.8% of all bugs reaching production. The only stage where fixing bugs is cheaper than in testing with a 90% detection rate is development itself; in the three downstream levels the costs skyrocket.
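The comparison between the two detection rates is a one-line calculation: the fraction of bugs that ship is (1 − detection rate) raised to the number of test stages. A minimal sketch, assuming the three stages described above:

```python
def shipped_fraction(detection_rate: float, stages: int = 3) -> float:
    """Fraction of the original bugs that survive all test stages."""
    return (1 - detection_rate) ** stages

for rate in (0.9, 0.8):
    shipped = shipped_fraction(rate)
    print(f"{rate:.0%} detection: {1 - shipped:.1%} clean, "
          f"{shipped:.1%} of bugs reach production")
# 90% detection: 99.9% clean, 0.1% of bugs reach production
# 80% detection: 99.2% clean, 0.8% of bugs reach production
```

Note that a 10-point drop in the detection rate lets eight times as many bugs through, and each of those bugs costs the most expensive, post-release price to fix.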
If you want to cut development costs and improve the quality of your released applications, take a close look at your software testing process. Given the skyrocketing costs and the potential reputation damage from buggy releases, do not hesitate to invest in your quality assurance team and in software testing tools. As described above, it is most cost-effective to catch bugs during development, so your focus should be on supporting your developers with testing tools that detect bugs automatically at this stage. DevSecOps techniques such as static analysis, dynamic analysis, and modern fuzzing are particularly well suited to this purpose.
If you are interested in improving your application quality while reducing your development costs, take a look at our automated security testing platform, which lets developers detect bugs as early as possible in the process.
What do you do to prevent follow-up costs from undetected software bugs? Leave us a comment!