🤖 AI Summary
To address the scarcity of meta-tasks in graph few-shot learning, where existing meta-learning approaches rely heavily on large collections of meta-training tasks, this paper proposes SMILE, a simple yet effective framework. SMILE introduces a dual-level MixUp mechanism, combining intra-task node-level MixUp with inter-task meta-level MixUp, to substantially enrich the diversity of available nodes and tasks during meta-training. It also explicitly incorporates the prior information carried by node degrees into node representation learning, yielding more expressive representations. Theoretical analysis shows that SMILE improves the model's generalization ability. Extensive experiments under both in-domain and cross-domain few-shot settings demonstrate that SMILE consistently outperforms competitive baselines while significantly reducing dependence on large meta-task sets, making it a practical solution for low-resource graph learning.
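The summary describes the dual-level MixUp only at a high level. The sketch below illustrates the general idea with plain Python lists: interpolating node features within one meta-task, and blending two meta-tasks into a synthetic one. All function names are hypothetical and the mixing weight is fixed for simplicity; the paper's actual method presumably samples the weight from a Beta distribution and operates on learned node embeddings, not raw lists.

```python
import random

def mixup(x1, x2, lam=0.5):
    """Convexly interpolate two feature vectors (the core MixUp step).
    A fixed `lam` is used here for illustration; MixUp normally samples
    lam from a Beta distribution per pair."""
    return [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]

def within_task_mixup(support_features, n_new=2):
    """Intra-task (node-level) MixUp: synthesize extra support nodes by
    interpolating random pairs inside a single meta-task."""
    augmented = list(support_features)
    for _ in range(n_new):
        a, b = random.sample(support_features, 2)
        augmented.append(mixup(a, b))
    return augmented

def across_task_mixup(task_a, task_b):
    """Inter-task (meta-level) MixUp: blend the support sets of two
    meta-tasks element-wise to synthesize a new meta-task."""
    return [mixup(a, b) for a, b in zip(task_a, task_b)]
```

For example, `across_task_mixup([[0.0, 0.0]], [[2.0, 2.0]])` blends two one-node tasks into `[[1.0, 1.0]]`, and `within_task_mixup` grows a task's support set without any extra labeling cost, which is exactly the scarce resource the paper targets.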
📝 Abstract
Graph neural networks have been demonstrated as a powerful paradigm for effectively learning graph-structured data on the web and mining content from it. Current leading graph models require a large number of labeled samples for training, which unavoidably leads to overfitting in few-shot scenarios. Recent research has sought to alleviate this issue by simultaneously leveraging graph learning and meta-learning paradigms. However, these graph meta-learning models assume the availability of numerous meta-training tasks to learn transferable meta-knowledge. Such an assumption may not be feasible in the real world due to the difficulty of constructing tasks and the substantial costs involved. Therefore, we propose a SiMple yet effectIve approach for graph few-shot Learning with fEwer tasks, named SMILE. We introduce a dual-level mixup strategy, encompassing both within-task and across-task mixup, to simultaneously enrich the available nodes and tasks in meta-learning. Moreover, we explicitly leverage the prior information provided by the node degrees in the graph to encode expressive node representations. Theoretically, we demonstrate that SMILE can enhance the model generalization ability. Empirically, SMILE consistently outperforms other competitive models by a large margin across all evaluated datasets with in-domain and cross-domain settings. Our anonymous code can be found here.
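The abstract does not specify how the node-degree prior enters the encoder, so the following is only one minimal, illustrative way to make the prior explicit: append each node's normalized degree to its raw feature vector before the features reach the GNN. The function name and the normalization scheme are assumptions, not the paper's implementation.

```python
def degree_aware_features(features, adjacency):
    """Append a normalized node degree to each node's feature vector.
    `adjacency` is a dense 0/1 adjacency matrix; degrees are normalized
    by the maximum degree so the extra channel stays in [0, 1]."""
    degrees = [sum(row) for row in adjacency]
    max_deg = max(degrees) or 1  # avoid division by zero on empty graphs
    return [feat + [deg / max_deg]
            for feat, deg in zip(features, degrees)]
```

On a two-node graph with a single edge, both nodes gain a degree channel of `1.0`; in a star graph, the hub's channel dominates, so a downstream encoder can distinguish structurally central nodes even when their raw features are similar.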