🤖 AI Summary
To address the performance limitations of general-purpose large language models (LLMs) in cybersecurity—stemming from scarce domain-specific training data and inadequate representation of security knowledge—this paper introduces Foundation-Sec-8B: the first 8-billion-parameter specialized security LLM built upon the Llama 3.1 architecture, explicitly designed for red-teaming, blue-teaming, vulnerability analysis, and alignment with the MITRE ATT&CK framework. The authors construct a high-quality, multi-granularity cybersecurity corpus and perform domain-adaptive continual pretraining followed by knowledge-enhanced fine-tuning. Evaluated on CyberSecEval2, SecBench, and a novel ATT&CK reasoning benchmark, Foundation-Sec-8B significantly outperforms open models of the same scale and achieves competitive performance on key tasks relative to Llama 3.1-70B and GPT-4o-mini. The model weights and training corpus are fully open-sourced to accelerate the adoption of AI-driven security tools in governmental and enterprise settings.
📝 Abstract
As transformer-based large language models (LLMs) increasingly permeate society, they have revolutionized domains such as software engineering, creative writing, and digital arts. However, their adoption in cybersecurity remains limited due to challenges such as the scarcity of specialized training data and the complexity of representing cybersecurity-specific knowledge. To address these gaps, we present Foundation-Sec-8B, a cybersecurity-focused LLM built on the Llama 3.1 architecture and enhanced through continued pretraining on a carefully curated cybersecurity corpus. We evaluate Foundation-Sec-8B across both established and new cybersecurity benchmarks, showing that it matches Llama 3.1-70B and GPT-4o-mini on certain cybersecurity-specific tasks. By releasing our model to the public, we aim to accelerate progress and adoption of AI-driven tools in both public and private cybersecurity contexts.