🤖 AI Summary
Current SOC frameworks overemphasize automation while neglecting systematic human-AI collaboration, particularly human oversight, dynamic trust calibration, and scalable AI autonomy. They also predominantly adopt static, binary autonomy settings, rendering them ill-suited to tasks of heterogeneous complexity and risk. To address this, we propose the first unified human-AI collaborative framework for SOCs, featuring a novel five-level AI autonomy model that formally characterizes the mapping among autonomy levels, trust thresholds, and human-in-the-loop (HITL) roles. The framework integrates a fine-tuned LLM-driven cybersecurity AI-Avatar, a trust-aware HITL mechanism, a hierarchical autonomous policy engine, and a simulation-based cyber range for empirical validation. Experiments demonstrate significant mitigation of alert fatigue, improved response-coordination efficiency, and high-fidelity, interpretable AI-augmented decision-making, all while preserving human authority and accountability.
📝 Abstract
This article presents a structured framework for human-AI collaboration in Security Operations Centers (SOCs), integrating AI autonomy, trust calibration, and human-in-the-loop decision making. Existing SOC frameworks often focus narrowly on automation, lacking systematic structures to manage human oversight, trust calibration, and scalable AI autonomy. Many assume static or binary autonomy settings, failing to account for the varied complexity, criticality, and risk of SOC tasks in a human-AI collaborative setting. To address these limitations, we propose a novel tiered-autonomy framework grounded in five levels of AI autonomy, from manual to fully autonomous, mapped to human-in-the-loop (HITL) roles and task-specific trust thresholds. This enables adaptive and explainable AI integration across core SOC functions, including monitoring, protection, threat detection, alert triage, and incident response. The proposed framework differentiates itself from previous research by establishing formal connections between autonomy, trust, and HITL roles across SOC levels, enabling adaptive task distribution according to operational complexity and associated risk. The framework is exemplified through a simulated cyber range featuring the cybersecurity AI-Avatar, a fine-tuned LLM-based SOC assistant. The AI-Avatar case study illustrates human-AI collaboration on SOC tasks: reducing alert fatigue, enhancing response coordination, and strategically calibrating trust. This research systematically demonstrates the theoretical grounding and practical feasibility of designing next-generation cognitive SOCs that leverage AI not to replace but to enhance human decision-making.
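To make the autonomy-trust-HITL mapping concrete, the following is a minimal sketch of how such a tiered model could be encoded. All level names, trust thresholds, HITL role descriptions, and the risk cap are illustrative assumptions for exposition, not values taken from the framework itself:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AutonomyLevel:
    level: int              # 1 (manual) .. 5 (fully autonomous)
    name: str
    trust_threshold: float  # minimum calibrated trust to operate at this level (assumed values)
    hitl_role: str          # human-in-the-loop role at this level (assumed descriptions)

# Hypothetical five-level ladder; thresholds and roles are placeholders.
LEVELS = [
    AutonomyLevel(1, "manual",           0.00, "human performs the task"),
    AutonomyLevel(2, "assisted",         0.40, "human acts on AI suggestions"),
    AutonomyLevel(3, "supervised",       0.60, "human approves each AI action"),
    AutonomyLevel(4, "conditional",      0.80, "human monitors and can veto"),
    AutonomyLevel(5, "fully autonomous", 0.95, "human audits after the fact"),
]

def select_level(trust_score: float, task_risk: float) -> AutonomyLevel:
    """Pick the highest autonomy level whose trust threshold is met,
    capped for high-risk tasks so a human stays in the loop."""
    eligible = [lv for lv in LEVELS if trust_score >= lv.trust_threshold]
    chosen = max(eligible, key=lambda lv: lv.level)
    if task_risk > 0.7 and chosen.level > 3:
        chosen = LEVELS[2]  # cap critical tasks at the supervised level
    return chosen
```

The key design idea this sketch captures is that autonomy is not fixed per system but selected per task, jointly from a calibrated trust score and the task's risk profile, with high-risk tasks forced back to a supervisory HITL role regardless of trust.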