Survey Perspective: The Role of Explainable AI in Threat Intelligence

📅 2025-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Security Operations Center (SOC) analysts commonly face AI-driven security alert overload, high false-positive rates, and insufficient model interpretability—leading to inefficient response and low trust in AI outputs. This study employs a large-scale, cross-industry survey and in-depth qualitative interviews to empirically identify analysts’ critical explanation needs: attack attribution, confidence quantification, and feature contribution interpretation—the first such evidence-based characterization. We propose a human factors–driven XAI design framework for security alerting systems, distilling three foundational explanation requirements: causality, actionability, and comparability. Based on these, we derive six operationally grounded XAI integration principles. Our findings reveal that current XAI tool adoption remains below 15%, yet 92% of analysts prioritize deploying systems aligned with our principles—yielding measurable improvements in alert comprehension depth, prioritization accuracy, and human-AI collaborative trust.

📝 Abstract
The increasing reliance on AI-based security tools in Security Operations Centers (SOCs) has transformed threat detection and response, yet analysts frequently struggle with alert overload, false positives, and a lack of contextual relevance. The inability to effectively analyze AI-generated security alerts leads to inefficiencies in incident response and reduces trust in automated decision-making. In this paper, we present the results and analysis of our investigation into how SOC analysts navigate AI-based alerts, the challenges they face with current security tools, and how explainability (XAI) integrated into their security workflows has the potential to become an effective decision-support mechanism. To this end, we conducted an industry survey. Using the survey responses, we analyze how security analysts process, retrieve, and prioritize alerts. Our findings indicate that most analysts have not yet adopted XAI-integrated tools, but they express high interest in attack attribution, confidence scores, and feature contribution explanations to improve interpretability and triage efficiency. Based on our findings, we also propose practical design recommendations for XAI-enhanced security alert systems, making AI-based cybersecurity solutions more transparent, interpretable, and actionable.
Problem

Research questions and friction points this paper is trying to address.

Addresses alert overload and false positives in AI-based security tools.
Explores how explainable AI (XAI) improves decision-making in threat intelligence.
Proposes design recommendations for more transparent and actionable AI cybersecurity systems.
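The explanation types the survey highlights (confidence scores and feature contributions) could surface in an alert pipeline along these lines. This is a minimal illustrative sketch, not the paper's system; all feature names, weights, and the `explain_alert` helper are hypothetical.

```python
# Hypothetical sketch of a feature-contribution explanation for an
# AI-generated security alert, in the spirit of the explanation needs
# the survey identifies (confidence score + per-feature contributions).
# Feature names and weights below are illustrative, not from the paper.

def explain_alert(score, contributions, top_k=3):
    """Return human-readable lines: overall confidence, then the
    top_k features ranked by absolute contribution to the score."""
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = [f"Alert confidence: {score:.0%}"]
    for feature, weight in ranked[:top_k]:
        direction = "raises" if weight > 0 else "lowers"
        lines.append(f"  {feature}: {direction} score by {abs(weight):.2f}")
    return lines

# Example alert: additive contributions from a hypothetical detector
alert_contributions = {
    "dest_port_rarity": 0.41,    # unusual destination port
    "beaconing_interval": 0.32,  # regular outbound timing
    "payload_entropy": 0.18,
    "tls_cert_age": -0.07,       # recently issued cert slightly lowers score
}
for line in explain_alert(0.87, alert_contributions):
    print(line)
```

In practice the contribution values would come from an attribution method such as SHAP applied to the detection model; the point of the sketch is only the presentation layer analysts said they need for triage.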
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates Explainable AI into security workflows.
Proposes XAI-enhanced alert systems for transparency.
Uses surveys to analyze alert processing efficiency.