On Interaction Effects in Greybox Fuzzing

📅 2025-10-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
In greybox fuzzing, the order in which mutation operators are applied significantly impacts path coverage and vulnerability discovery. Prior approaches neglect inter-operator interactions, leading to suboptimal input generation. This paper proposes MuoFuzz, a framework that models the conditional probability distribution over mutation operator sequences. It combines a lightweight linear probability model with a random-walk-based sequence sampling strategy to dynamically generate high-yield mutator sequences, without relying on handcrafted rules or reinforcement learning, and using only minimal runtime feedback. Evaluated on the FuzzBench and MAGMA benchmarks, MuoFuzz achieves the highest code coverage among the compared state-of-the-art fuzzers, outperforming AFL++ and MOPT. It discovers four vulnerabilities missed by AFL++ and one missed by both AFL++ and MOPT. These results empirically support the effectiveness and practicality of explicitly modeling sequential dependencies among mutation operators.

📝 Abstract
A greybox fuzzer is an automated software testing tool that generates new test inputs by applying randomly chosen mutators (e.g., flipping a bit or deleting a block of bytes) to a seed input in random order and adds all coverage-increasing inputs to the corpus of seeds. We hypothesize that the order in which mutators are applied to a seed input has an impact on the effectiveness of greybox fuzzers. In our experiments, we fit a linear model to a dataset that contains the effectiveness of all possible mutator pairs and indeed observe the conjectured interaction effect. This points us to more efficient fuzzing by choosing the most promising mutator sequence with a higher likelihood. We propose MuoFuzz, a greybox fuzzer that learns and chooses the most promising mutator sequences. MuoFuzz learns the conditional probability that the next mutator will yield an interesting input, given the previously selected mutator. Then, it samples from the learned probability using a random walk to generate mutator sequences. We compare the performance of MuoFuzz to AFL++, which uses a fixed selection probability, and MOPT, which optimizes the selection probability of each mutator in isolation. Experimental results on the FuzzBench and MAGMA benchmarks show that MuoFuzz achieves the highest code coverage and finds four bugs missed by AFL++ and one missed by both AFL++ and MOPT.
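The mechanism the abstract describes, learning the conditional probability that the next mutator yields an interesting input given the previous mutator, then sampling sequences by random walk, can be sketched roughly as follows. The mutator names, class structure, and Laplace-smoothed counters are illustrative assumptions, not MuoFuzz's actual implementation:

```python
import random

# Hypothetical mutator set; AFL++-style havoc stages use similar operators.
MUTATORS = ["bitflip", "byteflip", "arith", "interesting", "delete", "clone"]

class MutatorChain:
    """First-order Markov model over mutators: estimates
    P(next mutator yields an interesting input | previous mutator)."""

    def __init__(self, mutators):
        self.mutators = list(mutators)
        # Laplace-smoothed counts: hits[prev][nxt] = times nxt produced an
        # interesting (coverage-increasing) input right after prev.
        self.hits = {p: {n: 1 for n in self.mutators} for p in self.mutators}
        self.trials = {p: {n: 2 for n in self.mutators} for p in self.mutators}

    def update(self, prev, nxt, interesting):
        """Record runtime feedback for the ordered pair (prev, nxt)."""
        self.trials[prev][nxt] += 1
        if interesting:
            self.hits[prev][nxt] += 1

    def sample_sequence(self, length):
        """Random walk: each step draws the next mutator with probability
        proportional to its estimated success rate given the previous one."""
        seq = [random.choice(self.mutators)]
        for _ in range(length - 1):
            prev = seq[-1]
            weights = [self.hits[prev][n] / self.trials[prev][n]
                       for n in self.mutators]
            seq.append(random.choices(self.mutators, weights=weights)[0])
        return seq
```

As feedback accumulates, the walk drifts toward mutator pairs that have historically produced coverage-increasing inputs, while the smoothing keeps every transition possible so no pair is starved of trials.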
Problem

Research questions and friction points this paper is trying to address.

Investigating mutator order impact on fuzzer effectiveness
Proposing MuoFuzz to learn promising mutator sequences
Improving code coverage and bug detection over AFL++
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learns promising mutator sequences probabilistically
Models mutator interaction effects via linear regression
Uses random walk sampling for mutator selection
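The interaction-effect analysis mentioned above can be illustrated with a toy additive model: if the effectiveness of an ordered mutator pair were purely additive, a linear model with only per-mutator main effects would fit it; a pair the additive fit cannot explain signals an interaction effect. The data and residual-based check here are an illustrative simplification, not the paper's dataset or regression setup:

```python
import numpy as np

# Toy data (not the paper's): effectiveness of each ordered mutator pair (a, b).
mutators = ["bitflip", "arith", "delete"]
k = len(mutators)
# Every pair yields baseline 1.0, except (bitflip, delete), which synergizes.
eff = {(a, b): 1.0 for a in range(k) for b in range(k)}
eff[(0, 2)] = 1.8

# Additive model: y ≈ mu + beta_first[a] + beta_second[b], no interaction term.
X = np.zeros((k * k, 1 + 2 * k))
y = np.zeros(k * k)
for row, ((a, b), v) in enumerate(sorted(eff.items())):
    X[row, 0] = 1.0          # intercept
    X[row, 1 + a] = 1.0      # main effect of the first mutator
    X[row, 1 + k + b] = 1.0  # main effect of the second mutator
    y[row] = v

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef
# The pair with the largest positive residual is the one the additive model
# cannot explain, i.e. where an interaction effect is present.
pairs = [p for p, _ in sorted(eff.items())]
best = pairs[int(np.argmax(resid))]
```

On this toy table the largest residual lands on the synergistic pair, which is the kind of evidence the paper uses to argue that mutator pairs interact rather than contribute independently.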