🤖 AI Summary
Existing automated exploit generation (AEG) approaches are hindered by insufficient path coverage, limited constraint-solving capabilities, and high false-positive rates from static analysis tools, which impede efficient real-world vulnerability validation. This work proposes Vulnsage, the first AEG framework integrating a multi-agent architecture with a constraint-guided reflection mechanism. Vulnsage orchestrates specialized agents for code analysis, exploit generation, verification, and reflective refinement in a closed-loop, self-optimizing pipeline. By synergistically combining static analysis, large language models, execution trace feedback, and runtime error diagnostics, Vulnsage achieves a 34.64% higher exploit generation success rate than state-of-the-art tools such as explodejs in real-world scenarios and successfully discovers and validates 146 zero-day vulnerabilities.
📝 Abstract
Open-source libraries are widely used in modern software development, but they also introduce significant security vulnerabilities. While static analysis tools can identify potential vulnerabilities at scale, they often generate overwhelming reports with high false-positive rates. Automated Exploit Generation (AEG) emerges as a promising solution that confirms a vulnerability's authenticity by producing a working exploit. However, traditional AEG approaches based on fuzzing or symbolic execution suffer from limited path coverage and constraint-solving capabilities. Although LLMs show great potential for AEG, how to effectively leverage them to comprehend vulnerabilities and generate corresponding exploits remains an open question.
To address these challenges, we propose Vulnsage, a multi-agent framework for AEG. Vulnsage simulates the workflow of human security researchers by decomposing the complex AEG process into multiple specialized sub-agents: a Code Analyzer Agent, a Code Generation Agent, a Validation Agent, and a set of Reflection Agents, orchestrated by a central supervisor through iterative cycles. Given a target program, the Code Analyzer Agent performs static analysis to identify potential vulnerabilities and collects relevant information for each one. The Code Generation Agent then utilizes an LLM to generate candidate exploits. The Validation Agent and Reflection Agents form a feedback-driven self-refinement loop that uses execution traces and runtime error analysis to either iteratively improve the exploit or determine that the alert is a false positive.
Experimental evaluation demonstrates that Vulnsage succeeds in generating 34.64% more exploits than state-of-the-art tools such as explodejs. Furthermore, Vulnsage has successfully discovered and verified 146 zero-day vulnerabilities in real-world scenarios, demonstrating its practical effectiveness for assisting security assessment in software supply chains.