CASTLE: Benchmarking Dataset for Static Code Analyzers and LLMs towards CWE Detection

📅 2025-03-12
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
Existing source-code vulnerability detection tools—spanning static analyzers, large language models (LLMs), and formal verification tools—lack a unified, rigorous evaluation benchmark. Method: We introduce CASTLE, the first cross-paradigm evaluation framework, built on a curated micro-benchmark dataset of 250 C programs covering 25 CWE categories, which enables fair, controlled comparison across diverse tool classes. Contribution/Results: We propose the CASTLE Score as a holistic evaluation metric and conduct the first systematic benchmarking of 13 static analyzers, 10 LLMs, and 2 formal verifiers. LLMs achieve F1 ≥ 0.85 on small code snippets but hallucinate increasingly as code size grows; ESBMC shows the lowest false-positive rate but limited coverage, while static analyzers exhibit high false-positive rates overall. Our results demonstrate LLMs' practical utility for real-time, vulnerability-preventing code completion.

📝 Abstract
Identifying vulnerabilities in source code is crucial, especially in critical software components. Existing methods such as static analysis, dynamic analysis, formal verification, and recently Large Language Models are widely used to detect security flaws. This paper introduces CASTLE (CWE Automated Security Testing and Low-Level Evaluation), a benchmarking framework for evaluating the vulnerability detection capabilities of different methods. We assess 13 static analysis tools, 10 LLMs, and 2 formal verification tools using a hand-crafted dataset of 250 micro-benchmark programs covering 25 common CWEs. We propose the CASTLE Score, a novel evaluation metric to ensure fair comparison. Our results reveal key differences: ESBMC (a formal verification tool) minimizes false positives but struggles with vulnerabilities beyond model checking, such as weak cryptography or SQL injection. Static analyzers suffer from high false positives, increasing manual validation efforts for developers. LLMs perform exceptionally well in the CASTLE dataset when identifying vulnerabilities in small code snippets. However, their accuracy declines, and hallucinations increase as the code size grows. These results suggest that LLMs could play a pivotal role in future security solutions, particularly within code completion frameworks, where they can provide real-time guidance to prevent vulnerabilities. The dataset is accessible at https://github.com/CASTLE-Benchmark.
Problem

Research questions and friction points this paper is trying to address.

Evaluates vulnerability detection in source code using CASTLE framework.
Compares 13 static analyzers, 10 LLMs, and 2 formal verification tools.
Proposes CASTLE Score for fair comparison of vulnerability detection methods.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces CASTLE benchmarking framework for vulnerability detection
Proposes CASTLE Score for fair evaluation of tools
Assesses static analyzers, LLMs, and formal verification tools
Richard A. Dubniczky
PhD Student, ELTE
cybersecurity · cryptography · AI · web services
Krisztofer Zoltán Horvát
Eötvös Loránd University (ELTE), Budapest, Hungary
Tamás Bisztray
University of Oslo, Oslo, Norway; Cyentific AS, Oslo, Norway
M. Ferrag
Guelma University, Guelma, Algeria
Lucas C. Cordeiro
Professor of Computer Science, University of Manchester
Formal Methods · Automated Verification · Software Testing · Program Synthesis · Software Security
Norbert Tihanyi
Eötvös Loránd University (ELTE), Budapest, Hungary; Technology Innovation Institute (TII), Abu Dhabi, UAE