🤖 AI Summary
This paper addresses the lack of risk-aware assessment of security vulnerabilities introduced by LLM-based programming assistants. We propose the first risk-aware evaluation framework integrating vulnerability severity, generation probability, and prompt exposure (PE), a novel metric quantifying how susceptible a vulnerability is to being elicited through prompting. We further introduce model exposure (ME) to measure vulnerability prevalence across models. Empirical analysis reveals that mainstream open-source code-generation models remain significantly susceptible even to long-disclosed vulnerabilities, confirming a fundamental trade-off between security and functionality. Our contributions are threefold: (1) formal definition and empirical validation of the PE/ME dual-metric framework; (2) establishment of an actionable vulnerability prioritization mechanism grounded in quantitative risk estimation; and (3) identification of critical limitations in current security hardening techniques under realistic prompt distributions. Together, these provide both theoretical foundations and practical guidance for targeted remediation of high-risk vulnerabilities.
📝 Abstract
As the role of Large Language Model (LLM)-based coding assistants in software development becomes more critical, so does the role of the bugs they generate in the overall cybersecurity landscape. While a number of LLM code security benchmarks have been proposed alongside approaches to improve the security of generated code, it remains unclear to what extent they have impacted widely used coding LLMs. Here, we show that even the latest open-weight models still produce vulnerable code in the earliest reported vulnerability scenarios under realistic use settings, suggesting that the safety-functionality trade-off has so far prevented effective patching of these vulnerabilities. To help address this issue, we introduce Prompt Exposure (PE), a new severity metric that reflects the risk posed by an LLM-generated vulnerability, accounting for vulnerability severity, generation probability, and the formulation of the prompt that induces vulnerable code generation. To encourage the mitigation of the most serious and prevalent vulnerabilities, we use PE to define the Model Exposure (ME) score, which indicates the severity and prevalence of the vulnerabilities a model generates.
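The abstract does not give the exact formulas, but the ingredients it names (severity, generation probability, prompt formulation) suggest how a PE/ME computation might look. Below is a minimal, purely illustrative sketch under two assumptions that are ours, not the paper's: PE for a (vulnerability, prompt) pair is the severity (e.g. a CVSS score) weighted by the empirical probability that the prompt induces vulnerable code, and ME averages each vulnerability's worst-case PE across prompt formulations. The function names, the aggregation choices, and the example CWE entries are all hypothetical.

```python
def prompt_exposure(severity: float, p_vulnerable: float) -> float:
    """Hypothetical PE for one (vulnerability, prompt) pair:
    severity weighted by the chance the prompt elicits vulnerable code."""
    return severity * p_vulnerable


def model_exposure(results: dict[str, tuple[float, list[float]]]) -> float:
    """Hypothetical ME for a model: mean over vulnerabilities of the
    worst-case (max) PE across tested prompt formulations.

    `results` maps a vulnerability id to (severity, [p_vulnerable per prompt]).
    """
    if not results:
        return 0.0
    per_vuln = [
        max(prompt_exposure(sev, p) for p in probs)
        for sev, probs in results.values()
    ]
    return sum(per_vuln) / len(per_vuln)


# Example: two vulnerabilities probed with several prompt formulations each
# (severities and probabilities are made up for illustration).
me = model_exposure({
    "CWE-89": (9.8, [0.10, 0.45, 0.30]),   # SQL injection
    "CWE-798": (7.5, [0.05, 0.20]),        # hard-coded credentials
})
# max PE per vulnerability: 9.8*0.45 = 4.41 and 7.5*0.20 = 1.50
# ME = (4.41 + 1.50) / 2 = 2.955
```

Taking the maximum over prompts captures the adversarial-prompting angle: a vulnerability counts as exposed if any tested formulation reliably elicits it.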