🤖 AI Summary
This study presents the first empirical evaluation of large language models’ (LLMs) capability to automatically generate web vulnerability proof-of-concept (PoC) code from publicly disclosed vulnerability information—including CVE descriptions, patches, and contextual source code. We propose a multi-stage prompting framework coupled with a code-aware adaptive reasoning strategy, and systematically compare function-level versus file-level contextual inputs, finding the former significantly improves generation quality. Using GPT-4o and DeepSeek-R1, baseline end-to-end PoC generation success rates range from 8% to 34%; incorporating fine-grained contextual inputs increases performance to 25%–54%, and further integrating adaptive reasoning achieves 68%–72%. Notably, 23 newly generated PoCs have been formally accepted into the National Vulnerability Database (NVD) and Exploit Database, demonstrating both methodological efficacy and practical utility in security research and vulnerability disclosure workflows.
📝 Abstract
Recent advances in Large Language Models (LLMs) have brought remarkable progress in code understanding and reasoning, creating new opportunities and raising new concerns for software security. Among many downstream tasks, generating Proof-of-Concept (PoC) exploits plays a central role in vulnerability reproduction, comprehension, and mitigation. While previous research has focused primarily on zero-day exploitation, the growing availability of rich public information accompanying disclosed CVEs leads to a natural question: can LLMs effectively use this information to automatically generate valid PoCs? In this paper, we present the first empirical study of LLM-based PoC generation for web application vulnerabilities, focusing on the practical feasibility of leveraging publicly disclosed information. We evaluate GPT-4o and DeepSeek-R1 on 100 real-world and reproducible CVEs across three stages of vulnerability disclosure: (1) newly disclosed vulnerabilities with only descriptions, (2) 1-day vulnerabilities with patches, and (3) N-day vulnerabilities with full contextual code. Our results show that LLMs can automatically generate working PoCs in 8%-34% of cases using only public data, with DeepSeek-R1 consistently outperforming GPT-4o. Further analysis shows that supplementing code context improves success rates by 17%-20%, with function-level context yielding a 9%-13% larger gain than file-level context. Integrating adaptive reasoning strategies into prompt refinement further raises success rates to 68%-72%. Our findings suggest that LLMs could reshape vulnerability exploitation dynamics. To date, 23 newly generated PoCs have been accepted by NVD and Exploit DB.
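The pipeline described above, staged disclosure inputs (description, then patch, then contextual code) combined with code-aware adaptive refinement, can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation: the function names, the prompt wording, and the stubbed model/validator interfaces are all assumptions.

```python
# Hypothetical sketch of multi-stage PoC-generation prompting with
# adaptive, feedback-driven refinement. All names are illustrative;
# `model` and `validate` stand in for an LLM call and a PoC harness.

def build_prompt(description, patch=None, context=None):
    """Assemble a prompt from whichever disclosure-stage inputs exist:
    description only (new CVE), +patch (1-day), +code context (N-day)."""
    parts = [f"CVE description:\n{description}"]
    if patch:
        parts.append(f"Patch diff:\n{patch}")
    if context:
        # Function-level context reportedly outperforms file-level context.
        parts.append(f"Vulnerable function context:\n{context}")
    parts.append("Write a proof-of-concept exploit for this vulnerability.")
    return "\n\n".join(parts)


def generate_poc(model, validate, description, patch=None, context=None,
                 max_rounds=3):
    """Adaptive loop: regenerate until the PoC validates, feeding each
    failure back into the next prompt (code-aware refinement)."""
    prompt = build_prompt(description, patch, context)
    for _ in range(max_rounds):
        poc = model(prompt)
        ok, feedback = validate(poc)
        if ok:
            return poc
        prompt += f"\n\nPrevious attempt failed: {feedback}. Revise the PoC."
    return None  # give up after max_rounds refinement attempts
```

In this framing, the 8%-34% baseline corresponds to a single `model` call with description (and possibly patch) only, while the 68%-72% figures correspond to supplying function-level `context` and running the refinement loop.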