🤖 AI Summary
This work addresses a critical gap in existing vulnerability datasets—the lack of multilingual code examples explicitly linked to CAPEC/CWE standards—which hinders both vulnerability comprehension and the development of robust security models. To bridge this gap, the study presents the first systematic integration of the CAPEC/CWE knowledge framework with large language model (LLM) generation techniques, leveraging GPT-4o, Llama, and Claude. Through carefully engineered prompts and a rigorous consistency-validation mechanism, the authors construct a large-scale vulnerable-code dataset covering 615 attack patterns across three major programming languages. The generated code exhibits high cross-model consistency, with inter-model cosine similarity reaching 0.98, and high accuracy in preliminary evaluations, thereby providing a reliable foundation for training and evaluating vulnerability detection and repair models.
📝 Abstract
The increasing complexity and volume of software systems have heightened the importance of identifying and mitigating security vulnerabilities. Existing software vulnerability datasets frequently fall short of providing comprehensive, detailed code snippets explicitly linked to specific vulnerability descriptions, reducing their utility for advanced research and hindering efforts to develop a deeper understanding of security vulnerabilities. To address this challenge, we present a novel dataset that provides examples of vulnerable code snippets corresponding to Common Attack Pattern Enumerations and Classifications (CAPEC) and Common Weakness Enumeration (CWE) descriptions. By employing the capabilities of large language models (LLMs), we have developed a robust methodology for generating these examples. Our approach uses GPT-4o, Llama, and Claude to generate code snippets that exhibit specific vulnerabilities as described in the CAPEC and CWE documentation. This dataset not only enhances the understanding of security vulnerabilities in code but also serves as a valuable resource for training machine learning models focused on automatic vulnerability detection and remediation. Preliminary evaluations suggest that the LLM-generated dataset demonstrates high accuracy and can serve as a reliable reference for vulnerability identification systems. We found consistent results across the three models, with a cosine similarity of 0.98 among the generated code snippets. The final dataset comprises 615 CAPEC code snippets in three programming languages—Java, Python, and JavaScript—making it one of the most extensive and diverse resources in this domain.
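The consistency check described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the paper does not specify how snippets are vectorized, so this sketch assumes a simple bag-of-tokens representation and a hypothetical acceptance threshold; a real pipeline would likely use learned code embeddings.

```python
import math
import re
from collections import Counter


def token_counts(code: str) -> Counter:
    """Bag-of-tokens vector for a code snippet (assumed representation,
    standing in for whatever embedding the paper actually uses)."""
    return Counter(re.findall(r"\w+", code))


def cosine_similarity(code_a: str, code_b: str) -> float:
    """Cosine similarity between two snippets' token-count vectors."""
    a, b = token_counts(code_a), token_counts(code_b)
    dot = sum(a[tok] * b[tok] for tok in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def consistent(snippets: list[str], threshold: float = 0.98) -> bool:
    """Accept a CAPEC entry only if every pair of model outputs agrees
    above the threshold (0.98 mirrors the reported inter-model score)."""
    return all(
        cosine_similarity(snippets[i], snippets[j]) >= threshold
        for i in range(len(snippets))
        for j in range(i + 1, len(snippets))
    )
```

Under this scheme, a snippet triple (one per model) would be kept in the dataset only when `consistent` returns `True`; the threshold value is an assumption for illustration.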