🤖 AI Summary
This study addresses the limited interpretability of existing intrusion detection models in software-defined networks, which hinders the deployment of large language models (LLMs) in security-critical applications. To bridge this gap, the work introduces attribution analysis into encoder-based LLMs (i.e., Transformer-encoder architectures) for flow-level network intrusion detection. By integrating traffic feature modeling with attribution techniques, the approach reveals the key behavioral patterns underpinning model decisions. Experimental results show that the derived attributions align closely with established attack mechanisms, confirming that the model not only learns discriminative attack features effectively but also exhibits strong transparency and trustworthiness. This advance offers a pathway toward interpretable applications of LLMs in cybersecurity.
📝 Abstract
Software-Defined Networking (SDN) improves network flexibility but also raises the need for reliable and interpretable intrusion detection. Large Language Models (LLMs) have recently been explored for cybersecurity tasks owing to their strong representation-learning capabilities; however, their lack of transparency limits practical adoption in security-critical environments, so understanding how LLMs reach their decisions is essential. This paper presents an attribution-driven analysis of encoder-based LLMs for network intrusion detection using flow-level traffic features. The attribution results show that model decisions are driven by meaningful traffic behavior patterns that align with established intrusion detection principles, indicating that the models learn attack behavior from traffic dynamics and improving transparency and trust in Transformer-based SDN intrusion detection. These findings demonstrate the value of attribution methods for validating and trusting LLM-based security analysis.
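To make the attribution idea concrete, the sketch below applies integrated gradients to a toy flow-level detector. Everything here is illustrative, not from the paper: the feature names, the fixed weights, and the linear-sigmoid scorer (a stand-in for the paper's Transformer encoder, which is not reproduced here). The sketch only shows the mechanics of attributing a detection score back to individual flow features.

```python
import numpy as np

# Hypothetical flow-level features (names are illustrative, not from the paper).
FEATURES = ["pkt_rate", "byte_rate", "syn_ratio", "mean_iat"]

# Stand-in for a trained detector: fixed weights, linear-in-logit sigmoid.
# A real system would replace this with a differentiable transformer encoder.
w = np.array([2.0, 0.5, 3.0, -1.0])
b = -1.0

def detector(x):
    """Toy attack-probability scorer for a single flow-feature vector."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def integrated_gradients(x, baseline, steps=200):
    """Midpoint-rule approximation of integrated gradients for `detector`."""
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x)
    for a in alphas:
        z = baseline + a * (x - baseline)   # point on the straight-line path
        s = detector(z)
        total += s * (1.0 - s) * w          # analytic gradient of sigmoid(w.z + b)
    return (x - baseline) * total / steps

x = np.array([0.9, 0.2, 0.95, 0.1])        # a suspicious flow (SYN-flood-like)
baseline = np.zeros_like(x)                # all-zero "neutral" reference flow
attrs = integrated_gradients(x, baseline)

# Completeness property of IG: attributions sum to f(x) - f(baseline).
assert abs(attrs.sum() - (detector(x) - detector(baseline))) < 1e-4

for name, a in zip(FEATURES, attrs):
    print(f"{name:10s} {a:+.4f}")
```

With these made-up weights, `syn_ratio` receives the largest positive attribution, matching the kind of behavioral explanation the paper seeks (e.g., SYN-heavy flows driving a DDoS verdict). The same procedure applies to a transformer encoder by replacing the analytic gradient with autograd.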