🤖 AI Summary
Existing security evaluation benchmarks for large code generation models suffer from three key limitations: narrow coverage of security risks and capabilities, overreliance on static metrics (e.g., LLM-based classification or rule-based matching), and inherent trade-offs between dataset quality and scale. This paper introduces SecCodePLT, the first unified evaluation platform targeting two core threats: insecure coding practices and cyberattack assistance. Methodologically, it combines domain-expert validation with automated synthesis to generate high-quality, executable test cases; further, it dynamically executes attack chains within realistic sandbox environments, enabling quantitative measurement of multi-dimensional security metrics. Compared to the state-of-the-art benchmark CyberSecEval, SecCodePLT achieves significantly higher security relevance, uncovering non-trivial vulnerabilities in leading code-generation models and Cursor-like agents. It establishes the first end-to-end security evaluation standard specifically designed for code-generative AI.
📄 Abstract
Existing works have established multiple benchmarks to highlight the security risks associated with Code GenAI. These risks are primarily reflected in two areas: a model's potential to generate insecure code (insecure coding) and its utility in cyberattacks (cyberattack helpfulness). While these benchmarks have made significant strides, there remain opportunities for further improvement. For instance, many current benchmarks tend to focus more on a model's ability to provide attack suggestions than on its capacity to generate executable attacks. Additionally, most benchmarks rely heavily on static evaluation metrics, which may not be as precise as dynamic metrics such as passing test cases. Conversely, expert-verified benchmarks, while offering high-quality data, often operate at a smaller scale. To address these gaps, we develop SecCodePLT, a unified and comprehensive evaluation platform for code GenAIs' risks. For insecure code, we introduce a new methodology for data creation that combines expert input with automatic generation. Our methodology ensures data quality while enabling large-scale generation. We also associate samples with test cases to conduct code-related dynamic evaluation. For cyberattack helpfulness, we set up a real environment and construct samples that prompt a model to generate actual attacks, along with dynamic metrics in our environment. We conduct extensive experiments and show that SecCodePLT outperforms the state-of-the-art (SOTA) benchmark CyberSecEval in security relevance. Furthermore, it better identifies the security risks of SOTA models in insecure coding and cyberattack helpfulness. Finally, we apply SecCodePLT to the SOTA code agent, Cursor, and, for the first time, identify non-trivial security risks in this advanced coding agent.