TOSSS: a CVE-based Software Security Benchmark for Large Language Models

📅 2026-03-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the critical concern that large language models (LLMs) may inadvertently introduce or weaken security safeguards in software development by proposing the first scalable benchmark for evaluating secure code generation grounded in real-world CVE vulnerability data. The benchmark employs a two-option comparative design covering C/C++ and Java, leveraging a binary (0–1) scoring mechanism to quantitatively assess an LLM’s ability to distinguish between secure and vulnerable code snippets. It is designed for continuous integration of newly disclosed vulnerabilities, ensuring long-term relevance. The authors evaluate 14 prominent open- and closed-source LLMs, whose scores range from 0.48 to 0.89, thereby establishing the first systematic, extensible framework for quantifying LLM security behavior and addressing a significant gap in existing security evaluation methodologies.

📝 Abstract
With their increasing capabilities, Large Language Models (LLMs) are now used across many industries. They have become useful tools for software engineers and support a wide range of development tasks. As LLMs are increasingly used in software development workflows, a critical question arises: are LLMs good at software security? At the same time, organizations worldwide invest heavily in cybersecurity to reduce exposure to disruptive attacks. The integration of LLMs into software engineering workflows may introduce new vulnerabilities and weaken existing security efforts. We introduce TOSSS (Two-Option Secure Snippet Selection), a benchmark that measures the ability of LLMs to choose between secure and vulnerable code snippets. Existing security benchmarks for LLMs cover only a limited range of vulnerabilities. In contrast, TOSSS relies on the CVE database and provides an extensible framework that can integrate newly disclosed vulnerabilities over time. Our benchmark gives each model a security score between 0 and 1 based on its behavior; a score of 1 indicates that the model always selects the secure snippet, while a score of 0 indicates that it always selects the vulnerable one. We evaluate 14 widely used open-source and closed-source models on C/C++ and Java code and observe scores ranging from 0.48 to 0.89. LLM providers already publish many benchmark scores for their models, and TOSSS could become a complementary security-focused score to include in these reports.
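The abstract's scoring rule is simple to state concretely: each task pairs a secure snippet with a vulnerable one derived from a CVE, the model picks one, and the final score is the fraction of tasks where the secure snippet was chosen. The sketch below is a hypothetical illustration of that 0–1 scoring, not the authors' implementation; the function name `tosss_score` and the boolean encoding of selections are assumptions for clarity.

```python
def tosss_score(selections):
    """Compute a TOSSS-style security score in [0, 1].

    selections: list of booleans, one per two-option task,
    True if the model selected the secure snippet.
    """
    if not selections:
        raise ValueError("no tasks to score")
    # Score 1.0 = always picks the secure snippet; 0.0 = always the vulnerable one.
    return sum(selections) / len(selections)

# Example: a model that picks the secure snippet in 7 of 10 tasks scores 0.7,
# inside the 0.48–0.89 range the paper reports across 14 models.
print(tosss_score([True] * 7 + [False] * 3))  # → 0.7
```

Under this design, a score near 0.5 on binary choices is consistent with chance-level selection, which makes the reported low end of 0.48 easy to interpret.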
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Software Security
CVE
Code Vulnerability
Security Benchmark
Innovation

Methods, ideas, or system contributions that make the work stand out.

TOSSS
CVE-based benchmark
LLM security evaluation
secure code selection
extensible vulnerability framework
Marc Damie
University of Twente
Murat Bilgehan Ertan
CWI, Vrije Universiteit Amsterdam
machine learning · computer security · privacy
Domenico Essoussi
Erasmus University Rotterdam
Angela Makhanu
Datadog
Gaëtan Peter
Ecole Supérieure d’Ingénieurs Léonard de Vinci
Roos Wensveen
Leiden University