🤖 AI Summary
Current explainable AI (XAI) research suffers from a lack of standardized evaluation criteria, fragmented metrics, and poor cross-study comparability, undermining its credibility and practical utility. To address these challenges, this study conducts a systematic literature review following the PRISMA guidelines, synthesizing insights from 362 peer-reviewed publications. We propose VXAI, the first unified evaluation framework for XAI, structured along three core dimensions: explanation type, contextual adaptability, and quality requirements. Through hierarchical clustering analysis, we identify and consolidate 41 groups of functionally equivalent metrics. VXAI constitutes the most comprehensive, structured, and extensible standardized evaluation system for XAI to date. It enables systematic metric selection, enhances methodological comparability across studies, and provides both theoretical foundations and actionable guidance for empirical validation and future framework evolution.
📝 Abstract
Modern AI systems frequently rely on opaque black-box models, most notably Deep Neural Networks, whose performance stems from complex architectures with millions of learned parameters. While powerful, their complexity poses a major challenge to trustworthiness, particularly due to a lack of transparency. Explainable AI (XAI) addresses this issue by providing human-understandable explanations of model behavior. However, to ensure their usefulness and trustworthiness, such explanations must be rigorously evaluated. Despite the growing number of XAI methods, the field lacks standardized evaluation protocols and consensus on appropriate metrics. To address this gap, we conduct a systematic literature review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and introduce a unified framework for the eValuation of XAI (VXAI). We identify 362 relevant publications and aggregate their contributions into 41 groups of functionally similar metrics. In addition, we propose a three-dimensional categorization scheme spanning explanation type, evaluation contextuality, and explanation quality desiderata. Our framework provides the most comprehensive and structured overview of XAI evaluation to date. It supports systematic metric selection, promotes comparability across methods, and offers a flexible foundation for future extensions.