Execution-State-Aware LLM Reasoning for Automated Proof-of-Vulnerability Generation

📅 2026-02-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large language models (LLMs) struggle to satisfy complex semantic constraints and lack awareness of program execution states when automatically generating proofs of vulnerability (PoVs), limiting their generation accuracy. This work proposes DrillAgent, a novel framework that formulates PoV generation as an iterative hypothesis-verification-refinement process. DrillAgent integrates the semantic reasoning capabilities of LLMs with dynamic execution feedback by mapping low-level execution traces into source-level constraints, enabling closed-loop, incremental exploit synthesis. Experimental results on the SEC-bench benchmark demonstrate that DrillAgent significantly outperforms existing LLM-based agent approaches, solving up to 52.8% more real-world CVE tasks under a fixed computational budget.

📝 Abstract
Proof-of-Vulnerability (PoV) generation is a critical task in software security, serving as a cornerstone for vulnerability validation, false positive reduction, and patch verification. While directed fuzzing effectively drives path exploration, satisfying complex semantic constraints remains a persistent bottleneck in automated exploit generation. Large Language Models (LLMs) offer a promising alternative with their semantic reasoning capabilities; however, existing LLM-based approaches lack sufficient grounding in concrete execution behavior, limiting their ability to generate precise PoVs. In this paper, we present DrillAgent, an agentic framework that reformulates PoV generation as an iterative hypothesis-verification-refinement process. To bridge the gap between static reasoning and dynamic execution, DrillAgent synergizes LLM-based semantic inference with feedback from concrete program states. The agent analyzes the target code to hypothesize inputs, observes execution behavior, and employs a novel mechanism to translate low-level execution traces into source-level constraints. This closed-loop design enables the agent to incrementally align its input generation with the precise requirements of the vulnerability. We evaluate DrillAgent on SEC-bench, a large-scale benchmark of real-world C/C++ vulnerabilities. Experimental results show that DrillAgent substantially outperforms state-of-the-art LLM agent baselines under fixed budget constraints, solving up to 52.8% more CVE tasks than the best-performing baseline. These results highlight the necessity of execution-state-aware reasoning for reliable PoV generation in complex software systems.
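The closed-loop process the abstract describes (hypothesize an input, execute it, translate the observed trace into source-level constraints, refine) can be sketched as below. This is a minimal illustrative sketch, not DrillAgent's actual implementation: all names (`run_target`, `trace_to_constraints`, `refine`, `drill_loop`) and the toy crash condition are assumptions for demonstration, and the "LLM refinement" step is replaced by a trivial byte fix.

```python
def run_target(candidate: bytes, secret: bytes) -> dict:
    """Stand-in for executing the instrumented target: records which
    byte positions of the candidate fail the (hidden) crash condition."""
    mismatches = [i for i, (a, b) in enumerate(zip(candidate, secret)) if a != b]
    return {"crashed": not mismatches, "mismatches": mismatches}

def trace_to_constraints(trace: dict) -> list[str]:
    """Map a low-level execution trace to human-readable, source-level
    constraints that an LLM could reason over on the next iteration."""
    return [f"input[{i}] does not satisfy the checked condition"
            for i in trace["mismatches"]]

def refine(candidate: bytes, trace: dict, secret: bytes) -> bytes:
    """Stand-in for LLM-driven refinement: resolve one constraint per
    iteration (a real agent would infer the fix, not read the secret)."""
    c = list(candidate)
    if trace["mismatches"]:
        i = trace["mismatches"][0]
        c[i] = secret[i]
    return bytes(c)

def drill_loop(secret: bytes, budget: int = 16):
    """Iterate hypothesis -> verification -> refinement under a fixed budget."""
    candidate = bytes(len(secret))              # initial input hypothesis
    for step in range(budget):
        trace = run_target(candidate, secret)   # observe concrete execution
        if trace["crashed"]:
            return candidate, step              # PoV reproduces the crash
        trace_to_constraints(trace)             # source-level feedback
        candidate = refine(candidate, trace, secret)
    return None, budget                         # budget exhausted

pov, steps = drill_loop(b"BUG!")
```

The fixed `budget` parameter mirrors the paper's fixed-computational-budget evaluation setting: the agent either converges on a crashing input within the allotted iterations or reports failure.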
Problem

Research questions and friction points this paper is trying to address.

Proof-of-Vulnerability
Large Language Models
Execution-State-Aware
Automated Exploit Generation
Semantic Constraints
Innovation

Methods, ideas, or system contributions that make the work stand out.

execution-state-aware reasoning
Proof-of-Vulnerability generation
LLM agent
dynamic feedback integration
constraint translation