R1-Fuzz: Specializing Language Models for Textual Fuzzing via Reinforcement Learning

📅 2025-09-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Fuzz testing complex text-processing systems (e.g., compilers, interpreters) faces challenges in satisfying strong syntactic/semantic constraints and achieving deep logical coverage. To address this, we propose R1-Fuzz—the first reinforcement learning (RL)-based framework for domain-specific fine-tuning of language models (LMs) for fuzzing. R1-Fuzz employs *coverage slicing* to formulate targeted generation tasks and introduces a *distance-based reward mechanism* to guide small LMs (e.g., 7B-parameter models) in efficiently modeling program semantics and constraints, eliminating reliance on large foundation models and substantially reducing computational overhead. Experimental evaluation on real-world systems shows that R1-Fuzz-7B improves code coverage by up to 75% over state-of-the-art fuzzers and discovers 29 previously unknown vulnerabilities. These results demonstrate the effectiveness and practicality of RL-based fine-tuning of compact LMs for semantics-driven, deep-coverage fuzz testing.

📝 Abstract
Fuzzing is effective for vulnerability discovery but struggles with complex targets such as compilers, interpreters, and database engines, which accept textual input that must satisfy intricate syntactic and semantic constraints. Although language models (LMs) have attracted interest for this task due to their vast latent knowledge and reasoning potential, their practical adoption has been limited. The major challenges stem from insufficient exploration of deep program logic in real-world codebases and the high cost of leveraging larger models. To overcome these challenges, we propose R1-Fuzz, the first framework that leverages reinforcement learning (RL) to specialize cost-efficient LMs and integrate them for complex textual fuzzing input generation. R1-Fuzz introduces two key designs: coverage-slicing-based question construction and distance-based reward calculation. Through RL-based post-training of a model on our constructed dataset, R1-Fuzz builds a fuzzing workflow that tightly integrates LMs to reason about deep program semantics during fuzzing. Evaluations on diverse real-world targets show that our design enables a small model, named R1-Fuzz-7B, to rival or even outperform much larger models in real-world fuzzing. Notably, R1-Fuzz achieves up to 75% higher coverage than state-of-the-art fuzzers and discovers 29 previously unknown vulnerabilities, demonstrating its practicality.
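To make the "coverage-slicing-based question construction" idea concrete, here is a minimal, hypothetical sketch: the program's coverage map is partitioned into slices of related basic blocks, and each under-exercised slice becomes a targeted generation prompt for the LM. All names and the slicing/threshold logic are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of coverage-slicing-based question construction.
# Assumptions (not from the paper): slices are fixed-size groups of
# basic-block IDs, and a slice is "under-covered" when its total hit
# count falls below a threshold.

def slice_coverage(block_ids, slice_size):
    """Partition basic-block IDs into fixed-size coverage slices."""
    return [block_ids[i:i + slice_size]
            for i in range(0, len(block_ids), slice_size)]

def build_questions(slices, hit_counts, threshold=1):
    """Turn each under-covered slice into a targeted LM prompt."""
    questions = []
    for idx, blocks in enumerate(slices):
        hits = sum(hit_counts.get(b, 0) for b in blocks)
        if hits < threshold:
            questions.append(
                f"Generate a textual input that reaches blocks {blocks} "
                f"(slice {idx}) of the target program."
            )
    return questions
```

The point of the construction is that each prompt carries a concrete, localized coverage goal, so a small model can be trained (and rewarded) per slice rather than asked to improve global coverage at once.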
Problem

Research questions and friction points this paper is trying to address.

Specializing language models for fuzzing complex software with textual inputs
Overcoming limited exploration of deep program logic in real-world codebases
Reducing the high cost of leveraging large models for vulnerability discovery
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses reinforcement learning to specialize language models
Introduces coverage-slicing question construction for fuzzing
Implements distance-based reward calculation for training
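The distance-based reward named above can be sketched as follows: an input that executes closer (in control-flow-graph distance) to an uncovered target receives a higher reward, giving the RL loop a dense signal even when the target is not yet reached. The function name, the averaging over targets, and the `1/(1+d)` shaping are assumptions for illustration, not the paper's exact formulation.

```python
import math

def distance_reward(reached_blocks, target_blocks, distances):
    """Illustrative distance-based reward (assumed shaping, not the
    paper's formula): reward decays with the average CFG distance from
    the blocks an input actually reached to the uncovered targets.

    distances[(a, b)] is a precomputed CFG distance from block a to b.
    """
    if not target_blocks:
        return 0.0
    per_target = []
    for t in target_blocks:
        # Closest approach to this target over all reached blocks.
        d = min(distances.get((r, t), math.inf) for r in reached_blocks)
        per_target.append(d)
    avg = sum(per_target) / len(per_target)
    # Full reward at distance 0, smoothly decaying; 0 if unreachable.
    return 1.0 / (1.0 + avg) if math.isfinite(avg) else 0.0
```

A graded reward like this (as opposed to a binary hit/miss signal) is what lets RL fine-tuning make progress on deep targets that the model initially cannot reach at all.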
Authors

Jiayi Lin (Queen Mary University of London)
Liangcai Su (The University of Hong Kong; Tsinghua University)
Junzhe Li (The University of Hong Kong, Hong Kong SAR, China)
Chenxiong Qian (The University of Hong Kong, Hong Kong SAR, China)