🤖 AI Summary
This survey addresses the fragmentation of explanation mechanisms and the lack of standardized evaluation criteria in graph neural network (GNN)-based interpretable recommender systems. Methodologically, it presents a systematic review grounded in a unified “graph structure–interpretability” perspective, comprising: (i) a three-dimensional taxonomy covering learning paradigms, explanation generation strategies, and explanation types; (ii) a structured overview of seminal works from 2018–2024; and (iii) an evaluation framework assessing faithfulness, clarity, utility, and robustness. Key contributions include filling the gap in systematic surveys of graph-structured explanation mechanisms, elucidating design principles for cross-model transferable explanation architectures, and providing a roadmap of benchmark datasets, standardized evaluation protocols, and open challenges for future research in GNN-driven explainable recommendation.
📝 Abstract
Explainability of recommender systems has become essential for ensuring users' trust and satisfaction. Various types of explainable recommender systems have been proposed, including explainable graph-based recommender systems. This review paper discusses state-of-the-art approaches to these systems and categorizes them along three dimensions: learning methods, explaining methods, and explanation types. It also explores the commonly used datasets, explainability evaluation methods, and future directions of this research area. Compared with existing review papers, this paper focuses on graph-based explainability and covers the topics required for developing novel explainable graph-based recommender systems.