
Rule of Ten Part 2: Where Do the Costs Come From?

October 23, 2020 | 3 min

The Rule of Ten states that with each phase of the software development process a bug passes through undetected, the cost of fixing it increases by a factor of ten. A good bug detection rate is therefore a key factor, as it determines how many bugs make it to late stages undetected. The rule explains why bug fixing eats up resources and underlines the importance of accurate bug detection - especially in the early stages! If you are unfamiliar with the Rule of Ten, I highly recommend reading our recent article on this topic.
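To make the multiplication concrete, here is a minimal sketch of the rule in C++, assuming a hypothetical base cost of 100 EUR for a bug that is fixed in the phase where it was introduced and a simplified five-phase process; both the figure and the phase names are illustrative assumptions, not numbers from the original article.

```cpp
#include <cstdio>

int main() {
    // Simplified phases a bug can slip through; the names and the 100 EUR
    // base cost are assumptions made purely for illustration.
    const char* phases[] = {"Implementation", "Unit testing",
                            "Integration testing", "System testing",
                            "Production"};
    double cost = 100.0;  // assumed cost of fixing the bug where it was written

    for (const char* phase : phases) {
        std::printf("First detected in %-20s -> ~%9.0f EUR to fix\n", phase, cost);
        cost *= 10.0;  // Rule of Ten: one more undetected phase, ten times the cost
    }
    return 0;
}
```

Run as-is, this prints the familiar escalation from a three-digit fix during implementation to a seven-digit one in production.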

After releasing that article, we received a lot of feedback. While it was clear that early testing saves resources, what stood out to many of our readers was just how large these savings actually are. Many of them only grasped the full scope of these sums in the discussion that followed the article. Since this is a topic we constantly discuss with our partners and customers, we were not surprised by this reaction. To clear up the uncertainty for all of our readers, we want to use this article to break these big sums down and take a closer look at how they come about.

[Figure: The Rule of Ten]


Late Software Testing

Let’s take a typical product security and quality assurance (QA) process, as it still exists in many companies that practice “Agile”, “DevOps”, or even DevSecOps. For this example, let’s assume that within a big corporate project, an application is built over the course of two years and only then handed over to the product security manager (PSM).

The PSM then defines requirements and initiates a manual testing process. First, penetration testing is conducted with OWASP ZAP or commercial software. These efforts need to be documented and reviewed, which is already time-consuming. Most of the vulnerabilities found during this phase are trivial and could have been avoided if testing had been done earlier or had been automated. The pentester then has to tune the tooling, evaluate code coverage and results, tune again, rank the vulnerabilities, and write reports before the results can be fed back to the PSM. The PSM evaluates the results and consults the team to find out whether someone has a quick fix at hand. In most cases, however, this is a dead end. In the next step, the PSM discusses the bugs with the pentester and the project manager, and then passes the vulnerabilities on to the product manager (PM), who, after several feedback loops, finally defines which bugs are critical. The PM then assigns them to the different “Agile” teams.

The “Agile” teams then do their best to trace the bugs back to their roots in order to fix them. This is easier said than done, since in most cases more than a year has passed since the buggy code was written. Before they can actually start fixing the bugs, programmers first have to understand not just their own mess, but often also the mess of colleagues who have since switched teams or left the company. It goes without saying that this further delays the release. In some cases, simply fixing the bug is not enough, as it has caused consequential errors which also require closer inspection. In other cases, changes to the architecture are necessary, taking up more manual effort and causing further delays.

During the next step, additional pentests are conducted. If more vulnerabilities are found within these tests, the testing cycle has to be repeated before the application can be approved.

In parallel to the PSM process, there is a similar QA process, but especially in C/C++, it tends to find the same kinds of bugs as the pentests.


Early Testing = Saving Resources

The example above illustrates how resource-consuming these late-stage bugs can be. Now imagine if 90% of them, especially the trivial ones, could be caught during the early stages. This would reduce the effort required to fix them to a fraction and cut the number of vulnerabilities making it to the later stages. To achieve this, it is considered best practice to introduce automated testing mechanisms into the CI/CD pipeline early on.
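To illustrate how cheaply such a trivial bug can be caught early, here is a sketch of a plain unit test that could run on every commit; the copy_id() helper and its off-by-one are hypothetical examples invented for this post, not code from a real project. Compiled with AddressSanitizer (for example clang++ -fsanitize=address), the test aborts on the very first CI run instead of surfacing in a pentest years later.

```cpp
#include <cassert>
#include <cstring>

// Hypothetical helper with a classic off-by-one: the size check forgets
// the terminating null byte.
static void copy_id(const char* input, char* out, size_t out_size) {
    if (std::strlen(input) <= out_size) {   // BUG: should be strlen(input) + 1 <= out_size
        std::strcpy(out, input);            // can write one byte past the end of 'out'
    }
}

// A boundary-value test that runs in the CI pipeline on every commit.
int main() {
    char buf[8];
    copy_id("1234567", buf, sizeof(buf));   // exactly fits: 7 characters plus '\0'
    assert(std::strcmp(buf, "1234567") == 0);

    // Boundary case: 8 characters need 9 bytes. Under AddressSanitizer this
    // overflow aborts the test run immediately.
    copy_id("12345678", buf, sizeof(buf));
    return 0;
}
```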

The most effective way to continuously test your application is feedback-based fuzzing, an automated testing method that can be used throughout development, from unit testing to system testing. Tech leaders such as Google and Microsoft already use this technology to find the majority of their bugs.
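To give a feel for what feedback-based fuzzing looks like in practice, here is a minimal libFuzzer-style fuzz target in C++. The parse_request() function and its length-check bug are hypothetical stand-ins for the code under test; LLVMFuzzerTestOneInput, however, is the standard libFuzzer entry point (built, for example, with clang++ -g -fsanitize=fuzzer,address).

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical parser under test (an assumption for this sketch). It trusts
// a length field taken from the input instead of the real buffer size.
static bool parse_request(const uint8_t* data, size_t size) {
    if (size < 4) return false;  // expects a 4-byte length prefix
    uint32_t len = (uint32_t)data[0]        | ((uint32_t)data[1] << 8) |
                   ((uint32_t)data[2] << 16) | ((uint32_t)data[3] << 24);
    uint8_t checksum = 0;
    for (uint32_t i = 0; i < len; ++i) {
        checksum ^= data[4 + i];  // BUG: out-of-bounds read whenever len > size - 4
    }
    return checksum == 0;
}

// Standard libFuzzer entry point: the fuzzer calls it with mutated inputs and
// uses coverage feedback to steer mutation toward unexplored code paths.
extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
    parse_request(data, size);  // crashes and sanitizer findings are reported automatically
    return 0;
}
```

A harness like this can run continuously in the CI/CD pipeline, so the faulty length check is reported shortly after it is committed rather than surfacing in a pentest months or years later.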

Fuzzing is not reserved for the likes of Google and Microsoft. Our platform CI Fuzz focuses on usability, which makes it easy to use even for SMEs and non-tech companies. It offers an automated fuzzing solution that can be seamlessly integrated into your CI/CD process and already includes the essential tooling.

If you want to follow the lead of Google and Microsoft towards better, more secure software development, this is the way to go!

Feel free to leave us a comment.