LanG -- A Governance-Aware Agentic AI Platform for Unified Security Operations

📅 2026-04-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses critical challenges in Security Operations Centers (SOCs)—including alert fatigue, tool fragmentation, and insufficient cross-source event correlation—by proposing the first open-source platform that integrates AI governance with security agents. Built on a layered architecture, the platform combines LangGraph-based agent orchestration, LLM fine-tuning for rule generation, Louvain community detection, and Bayesian scoring, augmented by a human-in-the-loop feedback mechanism and a dual-layer guardrail system (regex-based filtering and Llama Prompt Guard 2). Experimental results demonstrate strong performance in event correlation (F1=87%), attack chain reconstruction (accuracy=87.5%), threat detection (F1=91.0%), and governance compliance (guardrail F1=98.1% with zero false positives). The system achieves a 96.2% rule deployment acceptance rate and a mean time to detection (MTTD) of only 1.58 seconds.
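The dual-layer guardrail described above (a fast regex filter followed by a semantic classifier) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the blocklist patterns, the 0.5 threshold, and the stand-in classifier callable are all assumptions; the paper's second layer is the Llama Prompt Guard 2 model.

```python
import re

# Layer 1: cheap regex screen for obvious prompt-injection phrasing.
# Patterns here are illustrative, not taken from the paper.
BLOCKLIST_PATTERNS = [
    re.compile(r"(?i)ignore\s+(all\s+)?previous\s+instructions"),
    re.compile(r"(?i)reveal\s+the\s+system\s+prompt"),
]

def regex_layer(prompt: str) -> bool:
    """Return True if the prompt trips any regex rule."""
    return any(p.search(prompt) for p in BLOCKLIST_PATTERNS)

def guardrail(prompt: str, semantic_classifier) -> str:
    """Two-layer pipeline: regex first, then a semantic score in [0, 1]."""
    if regex_layer(prompt):
        return "blocked:regex"
    # Layer 2: semantic classifier (Llama Prompt Guard 2 in the paper;
    # any callable returning an injection probability works here).
    if semantic_classifier(prompt) >= 0.5:
        return "blocked:semantic"
    return "allowed"

# Trivial stand-in classifier for demonstration only.
score = lambda p: 0.9 if "jailbreak" in p.lower() else 0.1
print(guardrail("Summarize today's alerts", score))      # allowed
print(guardrail("Ignore previous instructions", score))  # blocked:regex
```

Running the cheap regex layer first keeps the expensive model call off the hot path for obviously malicious inputs.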
📝 Abstract
Modern Security Operations Centers struggle with alert fatigue, fragmented tooling, and limited cross-source event correlation, challenges that current Security Information and Event Management and Extended Detection and Response systems only partially address through fragmented tools. This paper presents LLM-assisted network Governance (LanG), an open-source, governance-aware agentic AI platform for unified security operations, contributing: (i) a Unified Incident Context Record with a correlation engine (F1 = 87%), (ii) an Agentic AI Orchestrator on LangGraph with human-in-the-loop checkpoints, (iii) an LLM-based Rule Generator fine-tuned on four base models producing deployable Snort 2/3, Suricata, and YARA rules (average acceptance rate 96.2%), (iv) a Three-Phase Attack Reconstructor combining Louvain community detection, LLM-driven hypothesis generation, and Bayesian scoring (87.5% kill-chain accuracy), and (v) a layered Governance-MCP-Agentic AI-Security architecture in which all tools are exposed via the Model Context Protocol and governed by an AI Governance Policy Engine with a two-layer guardrail pipeline (regex + Llama Prompt Guard 2 semantic classifier, achieving a 98.1% F1 score with zero false positives in experiments). Designed for Managed Security Service Providers, the platform supports multi-tenant isolation, role-based access, and fully local deployment. Fine-tuned anomaly and threat detectors achieve weighted F1 scores of 99.0% and 91.0%, respectively, on intrusion-detection benchmarks, running inference in ≈21 ms with a machine-side mean time to detect of 1.58 s, and the rule generator exceeds 91% deployability on live IDS engines. A systematic comparison against eight SOC platforms confirms that LanG uniquely satisfies multiple industrial capabilities in a single open-source tool while enforcing selected AI governance policies.
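The Bayesian scoring step of the Three-Phase Attack Reconstructor (ranking candidate kill-chain hypotheses against observed evidence) can be sketched in miniature as below. The hypothesis names, priors, and per-event likelihoods are invented for illustration; the paper does not publish its scoring parameters.

```python
import math

def bayes_score(prior: float, likelihoods: list[float]) -> float:
    """Unnormalized log-posterior: log P(H) + sum of log P(e_i | H)."""
    return math.log(prior) + sum(math.log(l) for l in likelihoods)

# Candidate attack-chain hypotheses mapped to (prior, likelihoods of the
# observed events under that hypothesis). Values are purely illustrative.
hypotheses = {
    "phishing -> credential-theft -> lateral-movement": (0.3, [0.9, 0.8, 0.7]),
    "scan -> exploit -> exfiltration":                  (0.5, [0.4, 0.3, 0.2]),
}

scores = {h: bayes_score(p, ls) for h, (p, ls) in hypotheses.items()}
best = max(scores, key=scores.get)
print(best)  # phishing -> credential-theft -> lateral-movement
```

Note that a weak prior can be overcome by evidence: the second hypothesis starts with the higher prior (0.5 vs. 0.3) but loses once the event likelihoods are factored in.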
Problem

Research questions and friction points this paper is trying to address.

alert fatigue
fragmented tooling
cross-source event correlation
security operations
AI governance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Agentic AI
AI Governance
Unified Security Operations
LLM-based Rule Generation
Model Context Protocol
Anes Abdennebi
Department of Software Engineering and IT, École de Technologie Supérieure (ÉTS), Montreal, Canada
Nadjia Kara
Department of Software Engineering and IT, École de Technologie Supérieure (ÉTS), Montreal, Canada
Laaziz Lahlou
Department of Software Engineering and IT, École de Technologie Supérieure (ÉTS), Montreal, Canada
Hakima Ould-Slimane
Professor, Mathematics and Computer Science Department, UQTR
Cybersecurity, Data Privacy, Intrusion Detection, Federated Learning