🤖 AI Summary
Existing evaluations of large language models (LLMs) lack fine-grained, cognitively grounded assessment of their capabilities in software security—a domain demanding rigorous reasoning across abstraction levels.
Method: We systematically evaluate five mainstream LLMs using a Bloom’s taxonomy–informed framework spanning six cognitive levels—remembering, understanding, applying, analyzing, evaluating, and creating—and integrate diverse real-world data sources, including SALLM, XBOW, curriculum-based exam questions, and engineering tasks.
Contribution/Results: We introduce the novel concept of a “software security knowledge boundary” to quantify LLMs’ upper limits in higher-order reasoning; identify 51 cross-level, systematic security misconception patterns; and find that while models perform well on low-level tasks (e.g., vulnerability identification), performance degrades markedly on architecture-level security assessment and secure system construction. Their knowledge boundary is consistently confined between the “applying” and “analyzing” levels, with unstable attainment of “evaluating” and “creating” competencies.
📝 Abstract
Large language models (LLMs) are increasingly used in software development, but the depth of their software security expertise remains unclear. This work systematically evaluates the security comprehension of five leading LLMs: GPT-4o-Mini, GPT-5-Mini, Gemini-2.5-Flash, Llama-3.1, and Qwen-2.5, using Bloom's Taxonomy as a framework. We assess six cognitive dimensions: remembering, understanding, applying, analyzing, evaluating, and creating. Our methodology integrates diverse datasets, including curated multiple-choice questions, vulnerable code snippets (SALLM), course assessments from an Introduction to Software Security course, real-world case studies (XBOW), and project-based creation tasks from a Secure Software Engineering course. Results show that while LLMs perform well on lower-level cognitive tasks such as recalling facts and identifying known vulnerabilities, their performance degrades significantly on higher-order tasks that require reasoning, architectural evaluation, and secure system creation. Beyond reporting aggregate accuracy, we introduce a software security knowledge boundary that identifies the highest cognitive level at which a model consistently maintains reliable performance. In addition, we identify 51 recurring misconception patterns exhibited by LLMs across Bloom's levels.
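The knowledge-boundary idea described above can be sketched as a simple computation: given a model's accuracy at each Bloom level, the boundary is the highest level such that the model stays at or above a reliability threshold on every level up to and including it. This is an illustrative sketch only; the threshold, the `knowledge_boundary` function, and the example scores are assumptions, not values from the paper.

```python
# Hypothetical sketch (not the paper's exact method): determine the highest
# Bloom's-taxonomy level at which a model remains reliably accurate.

BLOOM_LEVELS = ["remembering", "understanding", "applying",
                "analyzing", "evaluating", "creating"]

def knowledge_boundary(accuracy_by_level, threshold=0.7):
    """Return the highest Bloom level the model handles reliably, or None.

    Reliability must hold at every level below the boundary as well,
    matching the intuition that higher-order skills build on lower ones.
    """
    boundary = None
    for level in BLOOM_LEVELS:
        if accuracy_by_level.get(level, 0.0) >= threshold:
            boundary = level
        else:
            break  # first unreliable level caps the boundary
    return boundary

# Made-up scores mirroring the reported pattern: performance drops
# between "applying" and "analyzing".
scores = {"remembering": 0.92, "understanding": 0.88, "applying": 0.78,
          "analyzing": 0.61, "evaluating": 0.55, "creating": 0.40}
print(knowledge_boundary(scores))  # → applying
```

Under this sketch, a model with the illustrative scores above has its boundary at "applying", consistent with the summary's finding that the boundary falls between "applying" and "analyzing".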