🤖 AI Summary
Existing graph few-shot learning methods face two key challenges: (1) reliance on predefined, global graph filters, which limits adaptability to local topological heterogeneity; and (2) sparse support-set samples, which cause a distribution shift between the support and query sets and impair generalization. To address these, the authors propose GRACE, a meta-learning framework that integrates adaptive spectrum experts with cross-set distribution calibration. Its core innovations are: (i) learnable local spectral experts that dynamically generate neighborhood-specific graph filters for each node; and (ii) a cross-set feature distribution calibration module that explicitly aligns the latent-space distributions of the support and query sets. Evaluated on multiple graph few-shot learning benchmarks, GRACE consistently outperforms state-of-the-art methods, demonstrating stronger modeling of local structural patterns and greater robustness to distribution shift.
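The summary's exact filter construction is not specified, but the idea of per-node spectral experts can be sketched as a gated mixture of a low-pass and a high-pass filter over the normalized graph Laplacian. Everything below (the two fixed experts, the `gate_logits` input standing in for a learnable gating network) is an illustrative assumption, not GRACE's actual architecture.

```python
import numpy as np

def normalized_laplacian(A):
    # L = I - D^{-1/2} A D^{-1/2}, the symmetric normalized graph Laplacian
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, d ** -0.5, 0.0)
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def spectral_expert_mixture(A, X, gate_logits):
    """Mix a low-pass and a high-pass filter with per-node gates.

    gate_logits: (n_nodes, 2) scores; in a learnable variant these would come
    from a gating network over each node's neighborhood (hypothetical here).
    """
    L = normalized_laplacian(A)
    I = np.eye(len(A))
    experts = [
        (I - 0.5 * L) @ X,  # low-pass expert: smooths signals over edges
        (0.5 * L) @ X,      # high-pass expert: emphasizes local differences
    ]
    # Softmax gates: each node mixes the experts with its own weights,
    # so the effective filter varies across the graph
    g = np.exp(gate_logits - gate_logits.max(axis=1, keepdims=True))
    g = g / g.sum(axis=1, keepdims=True)
    return g[:, 0:1] * experts[0] + g[:, 1:2] * experts[1]

# Toy graph: a path 0-1-2 with 2-d node features
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
X = np.array([[1., 0.], [0., 1.], [1., 1.]])
H = spectral_expert_mixture(A, X, gate_logits=np.zeros((3, 2)))
```

With uniform (zero) gate logits the two experts cancel into a simple rescaling of `X`; non-uniform logits let each node lean toward smoothing or sharpening, which is the adaptability the summary attributes to the spectral experts.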
📝 Abstract
Graph few-shot learning has attracted increasing attention due to its ability to rapidly adapt models to new tasks with only limited labeled nodes. Despite the remarkable progress made by existing graph few-shot learning methods, several key limitations remain. First, most current approaches rely on predefined, unified graph filters (e.g., low-pass or high-pass filters) to globally enhance or suppress node frequency signals. Such fixed spectral operations fail to account for the heterogeneity of local topological structures inherent in real-world graphs. Moreover, these methods often assume that the support and query sets are drawn from the same distribution. However, under few-shot conditions, the limited labeled data in the support set may not sufficiently capture the complex distribution of the query set, leading to suboptimal generalization. To address these challenges, we propose GRACE, a novel Graph few-shot leaRning framework that integrates Adaptive spectrum experts with Cross-sEt distribution calibration techniques. Theoretically, the proposed approach enhances model generalization by adapting to local structural variations and calibrating distributions across the support and query sets. Empirically, GRACE consistently outperforms state-of-the-art baselines across a wide range of experimental settings. Our code can be found here.
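The abstract does not specify the form of the cross-set calibration module, but a minimal version of "aligning support and query distributions" is first-and-second-moment matching in the embedding space. The sketch below, which shifts and rescales query embeddings toward the support set's per-dimension statistics, is an assumed stand-in for GRACE's actual module.

```python
import numpy as np

def calibrate_query(support, query, eps=1e-6):
    """Moment-matching calibration: shift/scale query embeddings so their
    per-dimension mean and std match the support set's.

    A simple illustrative stand-in for cross-set distribution calibration;
    GRACE's actual module is not described in the abstract (assumption).
    """
    s_mu, s_std = support.mean(0), support.std(0) + eps
    q_mu, q_std = query.mean(0), query.std(0) + eps
    # Standardize the query set, then re-express it in support statistics
    return (query - q_mu) / q_std * s_std + s_mu

rng = np.random.default_rng(0)
support = rng.normal(0.0, 1.0, size=(5, 4))   # few labeled support embeddings
query = rng.normal(2.0, 3.0, size=(20, 4))    # query set with shifted statistics
calibrated = calibrate_query(support, query)
```

After calibration the query embeddings share the support set's per-dimension mean and (approximately) its standard deviation, removing the kind of cross-set shift the abstract argues harms generalization; the calibration direction and statistic choice here are illustrative only.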