Assessing the Software Security Comprehension of Large Language Models

📅 2025-12-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing evaluations of large language models (LLMs) lack fine-grained, cognitively grounded assessment of their capabilities in software security—a domain demanding rigorous reasoning across abstraction levels. Method: We systematically evaluate five mainstream LLMs using a Bloom’s taxonomy–informed framework spanning six cognitive levels—remembering, understanding, applying, analyzing, evaluating, and creating—and integrate diverse real-world data sources, including SALLM, XBOW, curriculum-based exam questions, and engineering tasks. Contribution/Results: We introduce the novel concept of “software security knowledge boundary” to quantify LLMs’ upper limits in high-order reasoning; identify 51 cross-level, systematic security misconception patterns; and find that while models perform well on low-level tasks (e.g., vulnerability identification), performance degrades markedly on architecture-level security assessment and secure system construction. Their knowledge boundary is consistently confined between the “applying” and “analyzing” levels, with unstable attainment of “evaluating” and “creating” competencies.

📝 Abstract
Large language models (LLMs) are increasingly used in software development, but their level of software security expertise remains unclear. This work systematically evaluates the security comprehension of five leading LLMs: GPT-4o-Mini, GPT-5-Mini, Gemini-2.5-Flash, Llama-3.1, and Qwen-2.5, using Bloom's Taxonomy as a framework. We assess six cognitive dimensions: remembering, understanding, applying, analyzing, evaluating, and creating. Our methodology integrates diverse datasets, including curated multiple-choice questions, vulnerable code snippets (SALLM), course assessments from an Introduction to Software Security course, real-world case studies (XBOW), and project-based creation tasks from a Secure Software Engineering course. Results show that while LLMs perform well on lower-level cognitive tasks such as recalling facts and identifying known vulnerabilities, their performance degrades significantly on higher-order tasks that require reasoning, architectural evaluation, and secure system creation. Beyond reporting aggregate accuracy, we introduce a software security knowledge boundary that identifies the highest cognitive level at which a model consistently maintains reliable performance. In addition, we identify 51 recurring misconception patterns exhibited by LLMs across Bloom's levels.
Problem

Research questions and friction points this paper is trying to address.

Evaluates LLMs' software security knowledge across cognitive levels
Identifies performance gaps in reasoning and secure system creation
Analyzes recurring security misconceptions in large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematically evaluated LLMs' security comprehension using Bloom's Taxonomy
Integrated diverse datasets including real-world case studies and project tasks
Introduced a software security knowledge boundary to identify reliable performance limits
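The knowledge-boundary idea described above can be sketched in a few lines: find the highest Bloom's level at which a model's performance stays reliable, requiring reliability at every level below it. This is an illustrative reconstruction, not the paper's implementation; the accuracy values and the 0.7 threshold are hypothetical.

```python
# Hypothetical sketch of the "software security knowledge boundary":
# the highest Bloom's level at which a model remains reliable,
# with reliability required at every lower level as well.
# Threshold and scores are illustrative, not taken from the paper.

BLOOM_LEVELS = ["remembering", "understanding", "applying",
                "analyzing", "evaluating", "creating"]

def knowledge_boundary(accuracy_by_level, threshold=0.7):
    """Return the highest consecutive Bloom's level whose accuracy
    meets the threshold, starting from 'remembering'."""
    boundary = None
    for level in BLOOM_LEVELS:
        if accuracy_by_level.get(level, 0.0) >= threshold:
            boundary = level
        else:
            break  # a gap at a lower level caps the boundary there
    return boundary

# Example: strong on low-level tasks, unstable above "analyzing",
# matching the pattern the paper reports for the evaluated models.
scores = {"remembering": 0.92, "understanding": 0.88, "applying": 0.81,
          "analyzing": 0.74, "evaluating": 0.55, "creating": 0.40}
print(knowledge_boundary(scores))  # -> analyzing
```

The `break` encodes the "consistently maintains" requirement from the abstract: a model that fails at "applying" cannot claim an "evaluating" boundary even if its score there happens to be high.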
Mohammed Latif Siddiq
PhD Candidate, Computer Science & Engineering, University of Notre Dame
Software EngineeringSoftware SecurityApplied Machine LearningCode generation
Natalie Sekerak
Computer Science and Engineering, University of Notre Dame, Holy Cross Drive, Notre Dame, 46556, IN, USA.
Antonio Karam
Computer Science and Engineering, University of Notre Dame, Holy Cross Drive, Notre Dame, 46556, IN, USA.
Maria Leal
Computer Science and Engineering, University of Notre Dame, Holy Cross Drive, Notre Dame, 46556, IN, USA.
Arvin Islam-Gomes
Computer Science and Engineering, University of Notre Dame, Holy Cross Drive, Notre Dame, 46556, IN, USA.
Joanna C. S. Santos
Assistant Professor, University of Notre Dame
Software SecurityProgram AnalysisSoftware EngineeringCode GenerationSoftware Architecture