🤖 AI Summary
Security Operations Center (SOC) analysts commonly face AI-driven alert overload, high false-positive rates, and insufficient model interpretability, leading to inefficient response and low trust in AI outputs. This study uses a large-scale, cross-industry survey together with in-depth qualitative interviews to empirically identify analysts' critical explanation needs: attack attribution, confidence quantification, and feature-contribution interpretation, the first such evidence-based characterization. We propose a human factors-driven XAI design framework for security alerting systems, distilling three foundational explanation requirements (causality, actionability, and comparability) and deriving from them six operationally grounded principles for XAI integration. Our findings reveal that current XAI tool adoption remains below 15%, yet 92% of analysts would prioritize deploying systems aligned with our principles, which they expect to improve alert comprehension, prioritization accuracy, and trust in human-AI collaboration.
📝 Abstract
The increasing reliance on AI-based security tools in Security Operations Centers (SOCs) has transformed threat detection and response, yet analysts frequently struggle with alert overload, false positives, and a lack of contextual relevance. The inability to effectively analyze AI-generated security alerts leads to inefficiencies in incident response and reduces trust in automated decision-making. In this paper, we present the results and analysis of our investigation into how SOC analysts navigate AI-based alerts, the challenges they face with current security tools, and how explainability (XAI) integrated into their security workflows could serve as effective decision support. To this end, we conducted an industry survey. Using the survey responses, we analyze how security analysts process, retrieve, and prioritize alerts. Our findings indicate that most analysts have not yet adopted XAI-integrated tools, but they express strong interest in attack attribution, confidence scores, and feature-contribution explanations to improve interpretability and triage efficiency. Based on these findings, we propose practical design recommendations for XAI-enhanced security alert systems, making AI-based cybersecurity solutions more transparent, interpretable, and actionable.
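To make the requested explanation types concrete, below is a minimal, hypothetical sketch (not taken from the paper) of how an alert-scoring model could surface the two most-requested artifacts: a confidence score and per-feature contributions. It assumes a scikit-learn logistic-regression classifier trained on synthetic data with invented feature names; for a linear model, coefficient times feature value gives a simple per-feature attribution to the log-odds, standing in for richer XAI methods such as SHAP.

```python
# Hedged sketch: hypothetical alert classifier with confidence + attributions.
# Feature names and training data are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["failed_logins", "bytes_out_mb", "rare_process", "off_hours"]

# Toy data standing in for labeled alert telemetry.
rng = np.random.default_rng(0)
X = rng.random((200, len(FEATURES)))
y = (X[:, 0] + X[:, 2] > 1.0).astype(int)  # synthetic "malicious" label

model = LogisticRegression().fit(X, y)

def explain_alert(x):
    """Return a confidence score and ranked per-feature contributions."""
    confidence = model.predict_proba(x.reshape(1, -1))[0, 1]
    contributions = model.coef_[0] * x  # linear attribution to the log-odds
    ranked = sorted(zip(FEATURES, contributions), key=lambda t: -abs(t[1]))
    return confidence, ranked

conf, ranked = explain_alert(X[0])
print(f"confidence (P[malicious]) = {conf:.2f}")
for name, contrib in ranked:
    print(f"  {name:>15}: {contrib:+.3f}")
```

Attaching this kind of output to each alert (rather than a bare score) is the pattern the surveyed analysts describe: the confidence value supports triage prioritization, while the ranked contributions indicate which signals drove the verdict.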