🤖 AI Summary
To address the lack of interpretability and transparency in AI model decision-making, this paper proposes ShapG, the first method to integrate Graph Neural Networks (GNNs) into the Shapley value computation framework. ShapG explicitly constructs a feature interaction graph to model higher-order feature dependencies, relaxing the conventional assumption of feature independence. By combining GNN-based representation learning with Monte Carlo approximation of Shapley values, ShapG quantifies global feature importance efficiently and at scale. Extensive experiments on multiple benchmark datasets show that ShapG significantly improves the accuracy of feature importance rankings. Moreover, it achieves a 3.2× speedup over KernelSHAP while exhibiting linear time complexity, making it suitable for large-scale, high-dimensional feature settings.
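To make the Shapley value side concrete: the summary mentions Monte Carlo approximation of Shapley values, which in its classic form averages a feature's marginal contribution over random feature orderings. The sketch below shows only that generic permutation-sampling estimator, not ShapG's actual graph-guided procedure; the payoff function `value_fn` and the toy additive game are illustrative assumptions.

```python
import random

def shapley_monte_carlo(value_fn, n_features, n_samples=200, seed=0):
    """Generic Monte Carlo Shapley estimate via random permutations.

    value_fn: maps a frozenset of feature indices to a scalar payoff
    (e.g., model performance using only those features). This is a
    textbook estimator, not ShapG's graph-based sampling scheme.
    """
    rng = random.Random(seed)
    phi = [0.0] * n_features
    for _ in range(n_samples):
        perm = list(range(n_features))
        rng.shuffle(perm)
        coalition = set()
        prev = value_fn(frozenset(coalition))
        for f in perm:
            # Marginal contribution of f given the features before it.
            coalition.add(f)
            cur = value_fn(frozenset(coalition))
            phi[f] += cur - prev
            prev = cur
    return [p / n_samples for p in phi]

# Toy additive payoff: each feature's exact Shapley value is its weight.
weights = [0.5, 0.3, 0.2]
v = lambda S: sum(weights[i] for i in S)
print(shapley_monte_carlo(v, 3))  # ≈ [0.5, 0.3, 0.2]
```

Each permutation costs `n_features + 1` payoff evaluations, so runtime grows linearly in the number of features for a fixed sample budget, which is the scaling behavior the summary attributes to ShapG.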