PATCHEVAL: A New Benchmark for Evaluating LLMs on Patching Real-World Vulnerabilities

πŸ“… 2025-11-14
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing vulnerability repair benchmarks suffer from critical limitations: outdated vulnerabilities, narrow language coverage, unreliable patch validation, and poor reproducibility. To address these issues, this paper introduces PATCHEVAL, a large-scale, multi-language (Go/JavaScript/Python), automated repair evaluation benchmark grounded in real-world CVEs. The approach curates 1,000 high-quality CVEs reported between 2015 and 2025; constructs reproducible sandbox environments for a 230-CVE subset, enabling dual-mode patch validation through both security tests and functionality tests; and implements an end-to-end patch generation and evaluation pipeline driven by LLMs and agent frameworks. Comprehensive experiments across state-of-the-art models and agents reveal the practical effectiveness boundaries and fundamental bottlenecks of current Automated Vulnerability Repair (AVR) techniques. PATCHEVAL thus establishes a rigorous, empirically grounded benchmark to advance research and development in automated software repair.

πŸ“ Abstract
Software vulnerabilities are increasing at an alarming rate. However, manual patching is both time-consuming and resource-intensive, while existing automated vulnerability repair (AVR) techniques remain limited in effectiveness. Recent advances in large language models (LLMs) have opened a new paradigm for AVR, demonstrating remarkable progress. To examine the capability of LLMs in AVR, several vulnerability benchmarks have been proposed recently. However, they still suffer from key limitations: outdated vulnerabilities, limited language coverage, unreliable patch validation, and insufficient reproducibility. To overcome these challenges, we introduce PATCHEVAL, a multilingual benchmark covering Go, JavaScript, and Python, languages that existing benchmarks leave unexplored. PATCHEVAL curates a dataset of 1,000 vulnerabilities drawn from CVEs reported between 2015 and 2025, covering 65 distinct CWEs. A subset of 230 CVEs is further equipped with runtime sandbox environments, enabling patch verification through both security tests and functionality tests. To provide a systematic comparison of LLM-based vulnerability repair, we evaluate a series of state-of-the-art LLMs and agents, presenting an in-depth analysis that empirically yields key insights to guide future research in AVR.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' ability to patch real-world software vulnerabilities across multiple languages
Addressing limitations of outdated datasets and unreliable validation in existing benchmarks
Providing systematic comparison of LLM-based vulnerability repair with runtime verification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multilingual benchmark covering Go, JavaScript, and Python
Dataset of 1,000 CVEs reported between 2015 and 2025, spanning 65 distinct CWEs
Runtime sandbox environments for a 230-CVE subset, enabling patch verification via security and functionality tests
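The dual-mode validation described above can be sketched as a simple acceptance rule: a candidate patch counts as correct only if the security test (e.g., a proof-of-concept exploit) no longer triggers the vulnerability and the project's functional tests still pass. The `validate_patch` function and its injected test runners below are illustrative assumptions, not PATCHEVAL's actual API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ValidationResult:
    security_passed: bool    # the exploit/PoC no longer triggers the vulnerability
    functional_passed: bool  # the project's regression tests still pass

    @property
    def accepted(self) -> bool:
        # Dual-mode validation: a patch must block the exploit
        # AND preserve existing functionality to be accepted.
        return self.security_passed and self.functional_passed

def validate_patch(run_security_test: Callable[[], bool],
                   run_functional_tests: Callable[[], bool]) -> ValidationResult:
    """Run both validation modes (in practice, inside a sandboxed checkout
    of the vulnerable project with the candidate patch applied)."""
    return ValidationResult(
        security_passed=run_security_test(),
        functional_passed=run_functional_tests(),
    )

# Example: a patch that blocks the exploit but breaks a regression test
result = validate_patch(lambda: True, lambda: False)
print(result.accepted)  # prints False: the patch is rejected
```

The point of requiring both checks is to filter out "plausible" patches that silence the exploit by deleting or disabling functionality, a well-known failure mode when patches are judged by security tests alone.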
πŸ”Ž Similar Papers
No similar papers found.
👥 Authors
Zichao Wei (Huazhong University of Science and Technology)
Jun Zeng (University of California, Berkeley)
Ming Wen (Huazhong University of Science and Technology)
Zeliang Yu (Huazhong University of Science and Technology)
Kai Cheng (Huazhong University of Science and Technology)
Yiding Zhu (Huazhong University of Science and Technology)
Jingyi Guo (Huazhong University of Science and Technology)
Shiqi Zhou (ByteDance)
Le Yin (ByteDance)
Xiaodong Su (ByteDance)
Zhechao Ma (ByteDance)