Insight-LLM: LLM-enhanced Multi-view Fusion in Insider Threat Detection

📅 2025-09-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing insider threat detection (ITD) methods predominantly rely on single-view modeling, rendering them inadequate for capturing the sparsity and heterogeneity of user behavior. While multi-view approaches offer potential, they suffer from poor scalability due to tight coupling among sub-models, cross-view semantic misalignment, and modality imbalance. To address these challenges, this paper proposes the first modular multi-view fusion framework tailored for ITD. It introduces frozen large language models (LLMs) into ITD (novel in this domain), leveraging learnable prompts to achieve cross-view semantic alignment. Additionally, a dynamic weighted fusion mechanism is designed to mitigate modality imbalance. The framework is lightweight in parameter count and efficient at inference time; evaluated on public benchmarks, it achieves state-of-the-art performance while significantly reducing false positive rates.
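The dynamic weighted fusion mechanism described above can be illustrated with a minimal sketch: per-view relevance scores (here assumed to be produced by some learned gating network, which is not shown) are passed through a softmax, and the resulting weights combine the per-view embeddings. The function and variable names are illustrative, not the paper's API.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of floats."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def fuse_views(view_embeddings, view_scores):
    """Fuse per-view embeddings by a softmax-weighted sum.

    view_embeddings: list of equal-length embedding vectors, one per view.
    view_scores: one learned relevance score per view (assumed inputs;
    the paper's exact gating computation is not reproduced here).
    Returns (weights, fused_embedding).
    """
    weights = softmax(view_scores)
    dim = len(view_embeddings[0])
    fused = [0.0] * dim
    for w, emb in zip(weights, view_embeddings):
        for i, v in enumerate(emb):
            fused[i] += w * v
    return weights, fused
```

Because the weights sum to one, a view with a low score contributes proportionally less, which is how such a mechanism can keep a high-signal modality from drowning out weaker ones.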

📝 Abstract
Insider threat detection (ITD) requires analyzing sparse, heterogeneous user behavior. Existing ITD methods predominantly rely on single-view modeling, resulting in limited coverage and missed anomalies. While multi-view learning has shown promise in other domains, its direct application to ITD introduces significant challenges: scalability bottlenecks from independently trained sub-models, semantic misalignment across disparate feature spaces, and view imbalance that causes high-signal modalities to overshadow weaker ones. In this work, we present Insight-LLM, the first modular multi-view fusion framework specifically tailored for insider threat detection. Insight-LLM employs frozen, pre-trained LLMs with learnable prompts to align semantics across views and a dynamic weighted fusion mechanism to balance modalities, achieving state-of-the-art detection with low latency and parameter overhead.
Problem

Research questions and friction points this paper is trying to address.

Detecting insider threats from sparse heterogeneous user behavior
Overcoming single-view modeling limitations in anomaly detection
Addressing scalability and semantic misalignment in multi-view learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-enhanced multi-view fusion framework
Modular architecture with frozen pre-trained models
Low-latency parameter-efficient anomaly detection
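The "frozen pre-trained models with learnable prompts" idea above can be sketched as follows: the encoder's own parameters stay fixed, and only a small set of soft-prompt vectors, prepended to the input sequence, would be trained. The class below is a hedged toy stand-in (a mean-pooling "encoder" instead of a real LLM); all names are illustrative assumptions, not the paper's code.

```python
class PromptedFrozenEncoder:
    """Toy sketch of prompt tuning with a frozen backbone.

    Only `self.prompt` would receive gradient updates in training;
    the encoder itself (here just mean pooling over the sequence)
    stands in for a frozen pre-trained LLM.
    """

    def __init__(self, prompt_len, dim):
        # learnable soft-prompt vectors (zero-initialised for the sketch)
        self.prompt = [[0.0] * dim for _ in range(prompt_len)]

    def encode(self, token_embeddings):
        # prepend the soft prompt, then mean-pool as a stand-in for
        # the frozen backbone's forward pass
        seq = self.prompt + token_embeddings
        dim = len(seq[0])
        return [sum(tok[i] for tok in seq) / len(seq) for i in range(dim)]
```

Keeping the backbone frozen is what makes the approach parameter-efficient: the trainable state is only `prompt_len * dim` values per view, regardless of the LLM's size.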
Chengyu Song
UC Riverside
Security, Operating System, Programming Language, Trustworthy ML
Jianming Zheng
Laboratory for Advanced Computing and Intelligent Engineering, Wuxi, China