🤖 AI Summary
Fuzzing complex text-processing systems (e.g., compilers, interpreters) is challenging: inputs must satisfy strict syntactic and semantic constraints, and fuzzers struggle to reach deep program logic. To address this, the authors propose R1-Fuzz, the first reinforcement learning (RL)-based framework for domain-specific fine-tuning of language models (LMs) for fuzzing. R1-Fuzz employs *coverage slicing* to formulate targeted generation tasks and introduces a *distance-aware reward mechanism* that guides small LMs (e.g., 7B-parameter models) to efficiently model program semantics and constraints, eliminating reliance on large foundation models and substantially reducing computational overhead. Evaluation on real-world systems shows that R1-Fuzz-7B improves code coverage by up to 75% over state-of-the-art fuzzers and discovers 29 previously unknown vulnerabilities. These results demonstrate the effectiveness and practicality of RL-based fine-tuning of compact LMs for semantics-driven, deep-coverage fuzz testing.
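The summary does not spell out how a distance-aware reward might be computed, so here is a minimal illustrative sketch, not the paper's actual formula: the reward is maximal when a generated input's coverage intersects the target slice, and otherwise decays with the closest approach to it (e.g., measured in control-flow-graph distance). All names (`distance_reward`, `distances`, `max_distance`) are hypothetical.

```python
def distance_reward(covered_blocks: set[int],
                    target_slice: set[int],
                    distances: dict[int, int],
                    max_distance: int = 100) -> float:
    """Return a reward in [0, 1]: 1.0 if the input reaches the target
    slice, otherwise a value that decays with the closest approach.

    `distances` maps a basic-block id to its precomputed distance
    (e.g., CFG hops) from the target slice.
    """
    if covered_blocks & target_slice:
        return 1.0  # target slice reached: maximal reward
    # Closest covered block to the target, by precomputed distance.
    reachable = [distances[b] for b in covered_blocks if b in distances]
    if not reachable:
        return 0.0  # no path information: no partial credit
    return max(0.0, 1.0 - min(reachable) / max_distance)
```

A graded reward like this gives the RL fine-tuning loop a dense learning signal: inputs that get *closer* to uncovered code are preferred over those that miss entirely, even before the target is actually hit.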
📝 Abstract
Fuzzing is effective for vulnerability discovery but struggles with complex targets such as compilers, interpreters, and database engines, which accept textual input that must satisfy intricate syntactic and semantic constraints. Although language models (LMs) have attracted interest for this task due to their vast latent knowledge and reasoning potential, their practical adoption has been limited. The major challenges stem from insufficient exploration of deep program logic in real-world codebases and the high cost of leveraging larger models. To overcome these challenges, we propose R1-Fuzz, the first framework that leverages reinforcement learning (RL) to specialize cost-efficient LMs and integrate them for complex textual fuzzing input generation. R1-Fuzz introduces two key designs: coverage-slicing-based question construction and distance-based reward calculation. Through RL-based post-training of a model on our constructed dataset, R1-Fuzz builds a fuzzing workflow that tightly integrates LMs to reason about deep program semantics during fuzzing. Evaluations on diverse real-world targets show that our design enables a small model, named R1-Fuzz-7B, to rival or even outperform much larger models in real-world fuzzing. Notably, R1-Fuzz achieves up to 75% higher coverage than state-of-the-art fuzzers and discovers 29 previously unknown vulnerabilities, demonstrating its practicality.
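The abstract names coverage-slicing-based question construction without detailing it. As a rough illustration only, and not the paper's actual algorithm: one could partition currently uncovered code regions into small slices and turn each slice into a targeted generation task for the LM. Every name below (`make_slices`, `build_prompt`, `grammar_hint`) is hypothetical.

```python
from itertools import islice

def make_slices(uncovered_funcs: list[str],
                slice_size: int = 3) -> list[list[str]]:
    """Partition uncovered functions into fixed-size slices,
    each serving as one targeted generation task."""
    it = iter(uncovered_funcs)
    return [chunk for chunk in iter(lambda: list(islice(it, slice_size)), [])]

def build_prompt(target_slice: list[str], grammar_hint: str) -> str:
    """Format one slice as a generation question for the LM."""
    funcs = ", ".join(target_slice)
    return (f"Generate a {grammar_hint} input that exercises these "
            f"currently uncovered functions: {funcs}.")
```

Framing fuzzing as many small, slice-scoped questions (rather than one open-ended "cover everything" task) is what makes the generation objective concrete enough for RL fine-tuning of a compact model.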