🤖 AI Summary
To address the limited interpretability of black-box machine learning models in malware detection, which undermines trust in security-critical decision-making, this survey systematically reviews explainable AI (XAI) techniques for malware analysis. It characterizes how explanation methods such as LIME, SHAP, attention visualization, rule extraction, and concept activation vector analysis are adapted to malware classifiers built on static, dynamic, and multimodal features, and it examines the trade-off between interpretability and detection accuracy. Key contributions include: (1) a structured overview of XAI-driven malware classification and detection approaches; (2) a discussion of how explanations are evaluated in the malware domain; (3) a distillation of open research challenges in explainable malware analysis; and (4) guidance for practitioners deploying explanation-enhanced detectors. The survey argues that well-chosen XAI techniques can improve decision transparency and analysts' response workflows without sacrificing detection accuracy.
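To make the SHAP family of techniques mentioned above concrete: SHAP attributes a model's output to individual input features via Shapley values from cooperative game theory. The sketch below computes *exact* Shapley values by brute-force coalition enumeration for a tiny, hypothetical black-box malware scorer (the feature names and scoring rules are invented for illustration; real SHAP libraries approximate this computation, since exact enumeration is exponential in the number of features).

```python
from itertools import combinations
from math import factorial

# Hypothetical binary features a detector might see (illustration only).
FEATURES = ["packed", "calls_VirtualAlloc", "signed_binary"]

def malware_score(present):
    """Toy black-box maliciousness score in [0, 1] for a set of
    features present in a sample. Stands in for an opaque ML model."""
    score = 0.0
    if "packed" in present:
        score += 0.4
    if "calls_VirtualAlloc" in present:
        score += 0.3
    if "packed" in present and "calls_VirtualAlloc" in present:
        score += 0.2  # interaction term: packing plus memory-allocation API
    if "signed_binary" in present:
        score -= 0.3  # a valid signature lowers suspicion
    return max(0.0, min(1.0, score))

def shapley_values(f, features):
    """Exact Shapley value of each feature: the weighted average of its
    marginal contribution f(S ∪ {i}) - f(S) over all coalitions S."""
    n = len(features)
    phi = {}
    for feat in features:
        others = [g for g in features if g != feat]
        total = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (f(set(coalition) | {feat}) - f(set(coalition)))
        phi[feat] = total
    return phi

phi = shapley_values(malware_score, FEATURES)
for feat, v in sorted(phi.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feat:>20}: {v:+.3f}")
```

By the efficiency property, the attributions sum exactly to `malware_score(all features) - malware_score(no features)`, which is what makes a SHAP-style explanation a faithful additive decomposition of the black-box score an analyst sees.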
📝 Abstract
Machine learning (ML) has rapidly advanced in recent years, revolutionizing fields such as finance, medicine, and cybersecurity. In malware detection, ML-based approaches have demonstrated high accuracy; however, their lack of transparency poses a significant challenge. Traditional black-box models often fail to provide interpretable justifications for their predictions, limiting their adoption in security-critical environments where understanding the reasoning behind a detection is essential for threat mitigation and response. Explainable AI (XAI) addresses this gap by enhancing model interpretability while maintaining strong detection capabilities. This survey presents a comprehensive review of state-of-the-art ML techniques for malware analysis, with a specific focus on explainability methods. We examine existing XAI frameworks, their application in malware classification and detection, and the challenges associated with making malware detection models more interpretable. Additionally, we explore recent advancements and highlight open research challenges in the field of explainable malware analysis. By providing a structured overview of XAI-driven malware detection approaches, this survey serves as a valuable resource for researchers and practitioners seeking to bridge the gap between ML performance and explainability in cybersecurity.