🤖 AI Summary
This work addresses class imbalance in semi-supervised node classification with graph neural networks (GNNs). For the first time, it introduces bias-variance decomposition theory into graph learning, theoretically revealing that class imbalance predominantly inflates model variance, not bias. Building on this insight, we propose a variance-aware, theory-driven framework: (i) estimating node-level variance via graph-structure-augmented propagation, and (ii) imposing an explicit, differentiable variance regularization term during training. Unlike existing approaches, our method avoids heuristic resampling and cost-sensitive loss design, ensuring both theoretical consistency and implementation simplicity. Extensive experiments on multiple naturally occurring and synthetically imbalanced graph benchmarks demonstrate consistent and significant improvements over state-of-the-art methods, validating the framework's robustness and effectiveness.
📄 Abstract
This paper introduces a new approach to addressing class imbalance in graph neural networks (GNNs) for learning on graph-structured data. Our approach integrates imbalanced node classification with bias-variance decomposition, establishing a theoretical framework that closely relates data imbalance to model variance. We further leverage a graph augmentation technique to estimate this variance, and design a regularization term to alleviate the impact of imbalance. Extensive experiments are conducted on multiple benchmarks, including naturally imbalanced datasets and public-split class-imbalanced datasets, demonstrating that our approach outperforms state-of-the-art methods across various imbalanced scenarios. This work provides a novel theoretical perspective on the problem of imbalanced node classification in GNNs.
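The two ingredients described above (variance estimation via graph augmentation, plus a variance regularization term added to the training loss) can be sketched roughly as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's actual estimator: it assumes the model has been run on `V` augmented views of the graph to produce per-view logits, and that the hypothetical function names (`variance_regularizer`, `total_loss`) and the penalty weight `lam` are illustrative only.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax over the class dimension.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def variance_regularizer(view_logits):
    """Node-level variance estimated across augmented graph views.

    view_logits: (V, N, C) array -- V augmented views of the graph,
    N nodes, C classes. This interface is hypothetical; the paper's
    exact propagation-based estimator is not reproduced here.
    """
    probs = softmax(view_logits, axis=-1)       # (V, N, C)
    # Variance of predicted probabilities across views, summed over
    # classes, gives one variance score per node.
    node_var = probs.var(axis=0).sum(axis=-1)   # (N,)
    return node_var.mean()                      # scalar penalty

def total_loss(ce_loss, view_logits, lam=0.5):
    # Supervised loss plus the differentiable variance penalty;
    # `lam` is an assumed trade-off hyperparameter.
    return ce_loss + lam * variance_regularizer(view_logits)
```

If all views agree, the penalty vanishes; disagreement across augmented views (higher predictive variance, which the theory ties to class imbalance) increases the loss, pushing the model toward lower-variance predictions.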