🤖 AI Summary
This work addresses the challenge of limited labeled data in few-shot graph learning, which often undermines model robustness and interpretability. The authors propose the first "explanation-in-the-loop" training framework, which integrates interpretable subgraph extraction directly into the learning process. Specifically, label augmentation via belief propagation strengthens the sparse supervision signal, while an auxiliary graph neural network uses gradient backpropagation to identify discriminative subgraphs around target nodes, grounding predictions in these subgraphs and suppressing irrelevant neighborhood information. Evaluated on seven benchmark datasets, the method outperforms existing approaches, achieving state-of-the-art prediction accuracy, training efficiency, and explanation quality. Notably, it is the first approach to enhance both interpretability and generalization simultaneously in few-shot graph learning.
📝 Abstract
The challenges of training and inference in few-shot environments persist in graph representation learning. The quality and quantity of labels are often insufficient because annotating graph data demands extensive expert knowledge. In this context, Few-Shot Graph Learning (FSGL) approaches have been developed over the years. Through sophisticated neural architectures and customized training pipelines, these approaches improve model adaptability to new label distributions. However, compromises in the model's robustness and interpretability can lead to overfitting to noise in the labeled data and degraded performance. This paper introduces BAED, the first explanation-in-the-loop framework for the FSGL problem. We employ the belief propagation algorithm in a novel way to augment labels on graphs. Then, leveraging an auxiliary graph neural network and gradient backpropagation, our framework extracts explanatory subgraphs surrounding target nodes. The final predictions are based on these informative subgraphs, mitigating the influence of redundant information from neighboring nodes. Extensive experiments on seven benchmark datasets demonstrate the superior prediction accuracy, training efficiency, and explanation quality of BAED. As pioneering work, this paper highlights the potential of the explanation-based research paradigm in FSGL.
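The abstract does not spell out how belief propagation augments the few available labels. As a loose illustration only (the function name, the mean-field-style update, and all parameters below are assumptions of this sketch, not BAED's actual algorithm), the general idea of diffusing soft label beliefs from a handful of seed nodes over graph edges can be sketched as:

```python
import numpy as np

def propagate_labels(adj, seed_labels, n_classes, n_iters=10, alpha=0.9):
    """Diffuse soft class beliefs from a few labeled seed nodes to the rest
    of the graph. `adj` is an adjacency list; `seed_labels` maps node -> class.
    Seeds are clamped to their one-hot beliefs after every round.
    NOTE: a simplified mean-field stand-in, not the paper's belief propagation.
    """
    n = len(adj)
    beliefs = np.full((n, n_classes), 1.0 / n_classes)  # start uniform
    for node, label in seed_labels.items():
        beliefs[node] = np.eye(n_classes)[label]

    for _ in range(n_iters):
        new = np.full((n, n_classes), 1.0 / n_classes)
        for node in range(n):
            if adj[node]:
                # incoming "message": average of neighbor beliefs,
                # damped toward the uniform prior
                msg = np.mean([beliefs[nb] for nb in adj[node]], axis=0)
                new[node] = alpha * msg + (1 - alpha) * new[node]
                new[node] /= new[node].sum()
        for node, label in seed_labels.items():
            new[node] = np.eye(n_classes)[label]  # re-clamp observed labels
        beliefs = new
    return beliefs
```

On a 5-node path graph with node 0 labeled class 0 and node 4 labeled class 1, the propagated beliefs pull node 1 toward class 0 and node 3 toward class 1, yielding extra (soft) supervision for the unlabeled nodes. The clamping step is the standard choice for semi-supervised label propagation; the paper may handle observed labels differently.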