🤖 AI Summary
Insider threat detection (ITD) faces two core challenges: interpreting semantic intent and modeling dynamic user behavior. Existing large language model (LLM)-based approaches suffer from limited prompt adaptability and insufficient multimodal behavioral coverage. To address these, we propose a semantic–behavioral bimodal joint modeling framework. First, we enhance semantic reasoning and behavioral awareness via instruction tuning and 4W-guided (When-Where-What-Which) behavioral sequence abstraction. Second, we design a LoRA-enhanced dual-path fine-tuning architecture, integrated with a lightweight MLP-based fusion decision module and a discriminative adaptation strategy (DMFI-B) that mitigates extreme class imbalance. Evaluated on the CERT r4.2 and r5.2 datasets, our method significantly outperforms state-of-the-art approaches, demonstrating both the efficacy of bimodal joint modeling and its practical deployability.
📝 Abstract
Insider threat detection (ITD) poses a persistent and high-impact challenge in cybersecurity due to the subtle, long-term, and context-dependent nature of malicious insider behaviors. Traditional models often struggle to capture semantic intent and complex behavior dynamics, while existing LLM-based solutions face limitations in prompt adaptability and modality coverage. To bridge this gap, we propose DMFI, a dual-modality framework that integrates semantic inference with behavior-aware fine-tuning. DMFI converts raw logs into two structured views: (1) a semantic view that processes content-rich artifacts (e.g., emails, HTTP activity) using instruction-formatted prompts; and (2) a behavioral view, constructed via a 4W-guided (When-Where-What-Which) transformation that encodes contextual action sequences. Two LoRA-enhanced LLMs are fine-tuned independently, and their outputs are fused via a lightweight MLP-based decision module. We further introduce DMFI-B, a discriminative adaptation strategy that separates normal and abnormal behavior representations, improving robustness under severe class imbalance. Experiments on the CERT r4.2 and r5.2 datasets demonstrate that DMFI outperforms state-of-the-art methods in detection accuracy. Our approach combines the semantic reasoning power of LLMs with structured behavior modeling, offering a scalable and deployable solution for real-world insider threat detection.
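To make the pipeline concrete, the sketch below illustrates the two stages the abstract describes: rendering a raw log event into a 4W (When-Where-What-Which) behavioral line, and fusing the two modality scores with a lightweight MLP. This is a minimal illustration, not the authors' code: the function names (`to_4w_prompt`, `mlp_fuse`), the event fields, and the fixed MLP weights are all invented for the example; in DMFI the fusion weights would be learned, and the two scores would come from the LoRA-tuned semantic and behavioral LLM paths.

```python
import math

def to_4w_prompt(event):
    """Render a raw log event as a 4W (When-Where-What-Which) line
    for the behavioral LLM's instruction prompt. Field names are
    illustrative, not the paper's schema."""
    return (f"When: {event['time']} | Where: {event['host']} | "
            f"What: {event['action']} | Which: {event['resource']}")

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def mlp_fuse(sem_score, beh_score):
    """Lightweight MLP fusion: two per-modality risk scores ->
    one hidden layer (tanh) -> final risk in (0, 1).
    Weights are fixed placeholders; DMFI would learn them."""
    w1, b1 = ((0.9, 0.4), (0.3, 1.1)), (0.0, -0.2)   # hidden layer
    w2, b2 = (1.0, 1.0), -1.0                        # output layer
    h = [math.tanh(w1[i][0] * sem_score + w1[i][1] * beh_score + b1[i])
         for i in range(2)]
    return sigmoid(w2[0] * h[0] + w2[1] * h[1] + b2)

event = {"time": "2010-07-01 02:13", "host": "PC-3841",
         "action": "usb_insert", "resource": "removable_drive"}
prompt = to_4w_prompt(event)
# Hypothetical scores from the semantic and behavioral LLM paths:
risk = mlp_fuse(0.8, 0.9)
```

Keeping the fusion head this small is consistent with the abstract's design choice: the heavy lifting stays in the two fine-tuned LLMs, while the decision module only learns how to weigh their agreement.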