May the Feedback Be with You! Unlocking the Power of Feedback-Driven Deep Learning Framework Fuzzing via LLMs

📅 2025-06-21
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Defects in deep learning frameworks pose severe security risks in safety-critical domains; however, existing fuzzing techniques underutilize multi-source feedback and suffer from coarse granularity and low automation. This paper proposes FUEL, the first feedback-driven fuzzing framework leveraging dual large language model (LLM) agents: an *analysis LLM* performs fine-grained interpretation of coverage, crashes, and anomalies, while a *generation LLM* evolves high-diversity test cases based on this feedback, enabling closed-loop, synergistic feedback utilization. FUEL overcomes the static and unidirectional nature of conventional fuzzing feedback mechanisms. Evaluated on PyTorch and TensorFlow, FUEL identified 104 vulnerabilities, including 93 previously unknown ones; 47 have been patched, and 5 have received CVE identifiers.

πŸ“ Abstract
Artificial Intelligence (AI) infrastructures, represented by Deep Learning (DL) frameworks, have served as fundamental DL systems over the last decade. However, bugs in DL frameworks can lead to catastrophic consequences in critical scenarios (e.g., healthcare and autonomous driving). A simple yet effective way to find bugs in DL frameworks is fuzz testing (fuzzing). Unfortunately, existing fuzzing techniques have not comprehensively considered multiple types of feedback. Additionally, they analyze feedback in a coarse-grained manner, such as mutating test cases only according to whether coverage increases. Recently, researchers introduced Large Language Models (LLMs) into fuzzing. However, current LLM-based fuzzing techniques only focus on using LLMs to generate test cases while overlooking their potential to analyze feedback information, failing to create more valid and diverse test cases. To fill this gap, we propose FUEL to break the seal of feedback-driven fuzzing for DL frameworks. The backbone of FUEL comprises two LLM-based agents, namely the analysis LLM and the generation LLM. The analysis LLM agent infers analysis summaries from feedback information, while the generation LLM agent creates tests guided by these summaries. So far, FUEL has detected 104 bugs in PyTorch and TensorFlow, with 93 confirmed as new bugs, 47 already fixed, and 5 assigned CVE IDs. Our work indicates that considering multiple types of feedback is beneficial to fuzzing performance, and that leveraging LLMs to analyze feedback information is a promising direction. Our artifact is available at https://github.com/NJU-iSE/FUEL
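The closed loop the abstract describes (feedback → analysis LLM → summary → generation LLM → new test → execution → feedback) can be sketched as below. This is a minimal illustration, not FUEL's actual implementation: the two agent functions are hypothetical stubs standing in for real LLM calls, and the "framework under test" is just the local Python interpreter.

```python
# Minimal sketch of a FUEL-style closed feedback loop with two agents.
# Both agents are hypothetical stubs standing in for LLM calls.
import subprocess
import sys

def analysis_agent(feedback: dict) -> str:
    """Stand-in for the analysis LLM: turn raw execution feedback
    (exit status, stderr) into a short analysis summary."""
    if feedback["returncode"] != 0:
        last = feedback["stderr"].strip().splitlines()[-1]
        return f"previous test crashed: {last}"
    return "previous test passed; try a different operator or edge-case input"

def generation_agent(summary: str, round_no: int) -> str:
    """Stand-in for the generation LLM: emit a new test case guided by
    the summary. Here we cycle through canned candidates instead of
    prompting a model."""
    candidates = [
        "import math; print(math.sqrt(4))",
        "import math; print(math.sqrt(-1))",  # deliberately invalid input
        "print(sum(range(10)))",
    ]
    return candidates[round_no % len(candidates)]

def run_test(code: str) -> dict:
    """Execute one generated test in a subprocess and collect feedback."""
    proc = subprocess.run([sys.executable, "-c", code],
                          capture_output=True, text=True, timeout=10)
    return {"returncode": proc.returncode, "stderr": proc.stderr}

def fuzz_loop(rounds: int = 3) -> list:
    """Closed loop: feedback from each run guides the next generation."""
    findings = []
    feedback = {"returncode": 0, "stderr": ""}
    for i in range(rounds):
        summary = analysis_agent(feedback)
        test = generation_agent(summary, i)
        feedback = run_test(test)
        if feedback["returncode"] != 0:  # crash = candidate bug report
            findings.append((test, feedback["stderr"].strip().splitlines()[-1]))
    return findings

if __name__ == "__main__":
    for test, error in fuzz_loop():
        print(f"crash-inducing test: {test!r} -> {error}")
```

In FUEL itself, the generation agent targets DL framework APIs (PyTorch/TensorFlow operators) and the feedback includes coverage and anomaly signals in addition to crashes; the loop structure, however, is the same.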
Problem

Research questions and friction points this paper is trying to address.

Detecting bugs in DL frameworks via feedback-driven fuzzing
Enhancing fuzzing by leveraging LLMs for feedback analysis
Improving test case validity and diversity using LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses LLMs to analyze feedback information
Combines analysis and generation LLM agents
Detects bugs in DL frameworks effectively
Shaoyu Yang
Nanjing University
AI Infra, Fuzz Testing, Large Language Models, Code Intelligence, Mining Software Repositories
Chunrong Fang
Software Institute, Nanjing University
Software Testing, Software Engineering, Computer Science
Haifeng Lin
State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
Xiang Chen
School of Artificial Intelligence and Computer Science, Nantong University, Nantong, China
Zhenyu Chen
State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China