Christian Holler is a Staff Security Engineer at Mozilla. Over the past ten years, he has built an impressive track record in fuzzing browsers. His primary focus is the security of Mozilla Firefox, and he is one of the leading experts in grammar-based fuzzing. As a team lead, Christian is also well acquainted with the human side of automated bug finding.
Psychological effects, emotions and group dynamics play a crucial role in software development, especially when it comes to automated fuzz testing. In this article, Christian points out these phenomena and shows us how to make development and security teams work together.
During my nearly ten years at Mozilla, I often found myself somewhere between the security testing team and the development teams. Throughout this time, I was able to learn many valuable lessons. One of the biggest challenges I have faced was establishing trust between those two parties.
By nature, the relationship between development and security teams is often prone to bad blood. But with empathy and appreciation, you can bring them together to work as a unit. In this article, I want to share my experiences to help you improve this relationship.
A browser like Firefox consists of many individual components, and for each of them, there is at least one specialized team. Their domain knowledge allows them to better understand the code architecture, and they are more familiar with the contracts between the different modules. As a result, each team knows its code’s expected behavior and potential weaknesses best.
Nonetheless, each of the individual browser components requires a thorough security testing strategy. Fuzzing is a highly effective way to achieve this. However, to perform well, fuzzing requires each team’s domain knowledge. This is just one reason why the traditional "lone-warrior fuzzing" approach is no longer sufficient in this scenario. What we need instead is a mutual trust relationship.
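To make the domain-knowledge point concrete: in grammar-based fuzzing, that knowledge is encoded as an input grammar written by or with the component's developers. The sketch below is purely illustrative, generating inputs for a toy CSS-like language; the grammar and names are made up and do not reflect any real Mozilla tooling:

```python
import random

# Toy grammar: each nonterminal (in angle brackets) maps to a list of
# possible expansions; each expansion is a list of tokens.
GRAMMAR = {
    "<rule>": [["<selector>", " { ", "<decls>", " }"]],
    "<selector>": [["div"], ["span"], ["*"], ["<selector>", ":hover"]],
    "<decls>": [["<decl>"], ["<decl>", "; ", "<decls>"]],
    "<decl>": [["color: ", "<color>"], ["width: ", "<len>"]],
    "<color>": [["red"], ["#00ff00"], ["rgb(0, 0, 255)"]],
    "<len>": [["0"], ["100px"], ["-1px"], ["1e9em"]],
}

def generate(symbol="<rule>", depth=0, max_depth=10):
    """Randomly expand a nonterminal into a concrete test input."""
    if symbol not in GRAMMAR:
        return symbol  # terminal token: emit as-is
    options = GRAMMAR[symbol]
    # Past the depth cap, always take the first (shortest) expansion
    # so recursive rules like <decls> are guaranteed to terminate.
    expansion = options[0] if depth >= max_depth else random.choice(options)
    return "".join(generate(tok, depth + 1, max_depth) for tok in expansion)

print(generate())  # e.g. a random rule such as "div:hover { width: -1px }"
```

The interesting part is where the grammar comes from: rules like the negative or huge `<len>` values are exactly the kind of edge cases a component team can suggest, which is why fuzzing benefits so much from their involvement.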
There is a need for a positive working relationship between the development team and the security testing team. This observation applies not only to our case at Mozilla; it is a problem that concerns managers and tech leads in many industries.
So, please avoid fuzzing ninja-style. Don’t build a fuzzer on your own and in secret just to rapid-fire bugs at your development team. This approach is outdated. It might help you feel superior for a while, but not only will it deteriorate the relationship between you and your fellow developers, it is also the least effective way to build a fuzzer. If you can get your developers’ help, together you will achieve much more.
Development teams often suffer from a lack of resources. Dumping a massive load of bugs on their desk can lead to defensive behavior. Developers will feel overwhelmed and perceive these bugs as “extra work” caused by you and your security team. Merely digging up errors in a project that someone else put diligence and effort into can create a lot of tension.
Don’t show off your findings. Instead, try to understand the developer perspective. They might have a different set of priorities than you. It is crucial for a healthy relationship between security and development that both teams are involved in the testing process. This will also help the development team comprehend your findings better.
To avoid defensive behavior, share previous success stories with the developers. These examples will help them understand your motivation and priorities. Also, make sure to offer your helping hand and show genuine interest in their projects. A relationship is always based on reciprocity. If you communicate more, you might find some valuable hints on where to search for vulnerabilities and errors. This might save your security team a lot of time. It will also help developers understand that your work will help them overall.
Another aspect here is expectations: It is not uncommon for development teams to expect things from fuzzing that fuzzing cannot deliver and the other way around. To avoid conflicts later on, make sure you define goals together and educate each other on requirements early on.
A fuzz blocker is a bug that is disruptive to your fuzzing operations. Usually, it is a highly frequent bug that consumes many fuzzing resources while being hard to avoid or work around. Therefore, such bugs are of the highest priority for the security team. Developers, however, usually see fuzz blockers as low-priority bugs. And they are right, since fuzz blockers generally have little to no impact on the usability of an application. Here we have a prime example of very different priorities.

It becomes even more critical when you consider that fuzz blockers often occur at the beginning of a testing process. Especially if there is no trust relationship established yet, dealing with this situation properly is vital: if the fuzzing team’s first response after deploying fuzzing is just fuzz blockers, some developers might think that fuzzing will only find “unimportant” bugs. This can ruin the first impression. Educate your developers about this phenomenon!
Instead of feeding fuzz blockers back to the developers right away, I recommend writing the first fix yourself. This way, you will learn something about the project at hand. Being familiar with the codebase is especially useful if you are planning to implement continuous fuzzing on it. You will also learn more about the code’s development and debugging requirements. This might help you improve the bug-fixing process for fuzz bugs in the future.
Additionally, if you manage to fix all the fuzz blockers locally, you can continue fuzzing faster. If you share your patch with the developers, ask them to look over it. They will surely appreciate your effort, even if the patch is not perfect.
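Even before a local fix exists, you can keep a campaign productive by bucketing incoming crashes and suppressing the signatures of known fuzz blockers, so one frequent bug does not drown out everything else. A minimal sketch, assuming a hypothetical triage step where each crash is represented by its stack frames; the signature scheme and frame names are made up for illustration (real pipelines use far more robust bucketing):

```python
# Signatures of already-filed fuzz blockers awaiting a fix,
# keyed by the top stack frames of the crash (hypothetical names).
KNOWN_BLOCKERS = {
    ("nsCSSParser::ParseDecl", "nsCSSParser::ParseRule"),
}

def signature(stack_frames, top_n=2):
    """Reduce a crash stack to a coarse, hashable signature."""
    return tuple(stack_frames[:top_n])

def triage(crashes):
    """Split crashes into new findings vs. suppressed fuzz blockers."""
    new, suppressed = [], []
    for frames in crashes:
        bucket = suppressed if signature(frames) in KNOWN_BLOCKERS else new
        bucket.append(frames)
    return new, suppressed
```

The point of the sketch is the workflow, not the code: known blockers are filtered out of the daily stream so the security team only escalates genuinely new findings, while the blocker itself stays tracked until it is properly fixed.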
Implementing fuzzing in a late stage of the development lifecycle can have a devastating psychological effect. Developers often spend months or even years on a release. Starting with fuzz testing right before production is the worst way to do it. Coming out of the blue and shooting 20 bugs back at the developers after such a long development time can be very discouraging.
To avoid this, you should start fuzz testing as soon as the project is ready for it. But when is a project ready? Again, it is imperative for you to communicate with developers on this. Sometimes, very early versions of a project have too many known issues, and fuzzing would not contribute much. In general, fuzzing should start at the earliest version that developers consider stable enough for testing. This will ensure you catch bugs early on and give your team some time to resolve issues around fuzzing, while driving up its effectiveness.
To fix bugs in the early stages of the development process, development teams can rely on CI/CD-agnostic platforms for automated security testing, such as CI Fuzz. This way, dev teams can find up to 80% of their bugs during development that would otherwise only be found later, in acceptance tests.
Sometimes it can even be helpful to fuzz experimental features. But again, you should consult with the developers first. Developers usually implement experimental features behind switches for a reason. At this stage, they are probably already aware of some bugs and vulnerabilities. Pointing them out by turning on switches without further consultation is usually not too well received.
Try to make the security testing process transparent to the developers. Provide them with insights on how fuzzing actually works and regularly share updates on progress, e.g. through code coverage. Make the use of required tools (e.g. to reproduce your bugs) as easy as possible for developers. Also, make sure they understand your intentions and ask them if they have everything they need to comprehend what you are doing. In the end, educating them will help you implement early-stage security testing practices.
“At the end of the day, fuzzing is teamwork.”
– Christian Holler
This last point sums up the essence of this entire article. Security teams should involve developers in their testing processes from start to finish. As a company, you can achieve so much more if your teams work together instead of against each other. At the end of the day, fuzzing is teamwork.
Christian Holler, also known as decoder, is currently working as a Staff Security Engineer at Mozilla. His main focus lies with fuzzing and other dynamic and static analysis methods. During his ten years at Mozilla, he has accumulated extensive experience, working with both software development and security professionals.