AI Summary
Existing vulnerability repair benchmarks suffer from critical limitations: outdated vulnerabilities, narrow language coverage, unreliable patch validation, and poor reproducibility. To address these issues, this paper introduces PATCHEVAL, the first large-scale, multi-language (Go/JavaScript/Python) automated repair evaluation benchmark grounded in real-world CVEs. The approach systematically curates 1,000 high-quality CVEs reported between 2015 and 2025; constructs a reproducible sandbox environment that integrates static analysis, dynamic security testing, and functional verification for dual-mode patch validation; and implements an end-to-end patch generation and evaluation pipeline powered by LLMs and agent frameworks. Comprehensive experiments across state-of-the-art models and agents reveal the practical effectiveness boundaries and fundamental bottlenecks of current Automated Vulnerability Repair (AVR) techniques. PATCHEVAL thus establishes a rigorous, empirically grounded benchmark to advance research and development in automated software repair.
Abstract
Software vulnerabilities are increasing at an alarming rate. Manual patching is both time-consuming and resource-intensive, while existing automated vulnerability repair (AVR) techniques remain limited in effectiveness. Recent advances in large language models (LLMs) have opened a new paradigm for AVR, demonstrating remarkable progress. To examine the capability of LLMs in AVR, several vulnerability benchmarks have been proposed recently; however, they still suffer from key limitations: outdated vulnerabilities, limited language coverage, unreliable patch validation, and insufficient reproducibility. To overcome these challenges, we introduce PATCHEVAL, a multilingual benchmark covering Go, JavaScript, and Python, languages largely unexplored by existing benchmarks. PATCHEVAL curates a dataset of 1,000 vulnerabilities drawn from CVEs reported between 2015 and 2025, covering 65 distinct CWEs. A subset of 230 CVEs is further equipped with runtime sandbox environments, enabling patch verification through both security tests and functionality tests. To provide a systematic comparison of LLM-based vulnerability repair, we evaluate a series of state-of-the-art LLMs and agents, presenting an in-depth analysis that empirically yields key insights to guide future research in AVR.
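The dual-mode verification described above, where a candidate patch must pass both security tests and functionality tests, can be sketched as follows. This is a minimal illustration in Python (one of PATCHEVAL's target languages); the names `evaluate_patch`, `PatchResult`, and the test callables are illustrative assumptions, not PATCHEVAL's actual API:

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class PatchResult:
    """Outcome of validating one candidate patch inside the sandbox."""
    security_passed: bool    # exploit / proof-of-concept no longer succeeds
    functional_passed: bool  # the project's regression suite still passes

    @property
    def plausible(self) -> bool:
        # A patch counts as correct only if it blocks the vulnerability
        # AND preserves the original functionality of the project.
        return self.security_passed and self.functional_passed


def evaluate_patch(run_security_tests: Callable[[], bool],
                   run_functional_tests: Callable[[], bool]) -> PatchResult:
    # Dual-mode validation: both suites run against the patched code
    # in a reproducible sandbox environment.
    return PatchResult(
        security_passed=run_security_tests(),
        functional_passed=run_functional_tests(),
    )


# A patch that stops the exploit but breaks existing behavior is rejected.
broken = evaluate_patch(lambda: True, lambda: False)
print(broken.plausible)  # False
```

The key design point this sketch captures is that passing security tests alone is insufficient: a trivial patch that deletes the vulnerable feature would stop the exploit, so functionality tests are needed to rule out such degenerate repairs.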