🤖 AI Summary
This paper proposes Posterior Label Smoothing (PLS), a label regularization method for transductive node classification that works across both homophilous and heterophilous graphs. Rather than using a fixed uniform smoothing target, PLS derives soft labels from a Bayesian posterior: a likelihood over neighborhood labels is combined with a prior estimated from global graph statistics, so the smoothing adapts naturally to the structural properties of each graph. Iterative pseudo-labeling further refines these global label statistics. Experiments on 10 benchmark datasets with 8 backbone models show consistent accuracy improvements, and the accompanying analysis indicates that the soft labels mitigate overfitting and improve generalization. By integrating structural information and global statistics into label smoothing, PLS offers an interpretable and robust approach to learning on heterophilous graphs.
📝 Abstract
Label smoothing is a widely studied regularization technique in machine learning. However, its potential for node classification in graph-structured data, spanning homophilic to heterophilic graphs, remains largely unexplored. We introduce posterior label smoothing, a novel method for transductive node classification that derives soft labels from a posterior distribution conditioned on neighborhood labels. The likelihood and prior distributions are estimated from the global statistics of the graph structure, allowing our approach to adapt naturally to various graph properties. We evaluate our method on 10 benchmark datasets using eight baseline models, demonstrating consistent improvements in classification accuracy. Further analysis demonstrates that soft labels mitigate overfitting during training, leading to better generalization performance, and that pseudo-labeling effectively refines the global label statistics of the graph. Our code is available at https://github.com/ml-postech/PosteL.
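To make the idea concrete, here is a minimal NumPy sketch of posterior-style soft labeling under the setup the abstract describes: the prior is the global class frequency over labeled nodes, the likelihood is the empirical class-to-class co-occurrence along edges, and each node's soft label mixes its one-hot label with the posterior over its labeled neighbors. All function and variable names are illustrative assumptions, not the authors' implementation (see the linked repository for that).

```python
import numpy as np

def posterior_soft_labels(edges, labels, num_classes, alpha=0.1, eps=1e-12):
    """Sketch of posterior label smoothing (illustrative, not the paper's code).

    edges: list of undirected (u, v) pairs.
    labels: labels[i] is the class of node i, or -1 if unlabeled (transductive).
    alpha: smoothing strength mixing the posterior into the one-hot label.
    """
    n = len(labels)
    labeled = [i for i in range(n) if labels[i] >= 0]

    # Prior: global class frequencies over labeled nodes.
    prior = np.full(num_classes, eps)
    for i in labeled:
        prior[labels[i]] += 1
    prior /= prior.sum()

    # Likelihood: empirical class-to-class statistics along edges,
    # M[c, c'] ~ P(neighbor has class c' | node has class c).
    M = np.full((num_classes, num_classes), eps)
    for u, v in edges:
        if labels[u] >= 0 and labels[v] >= 0:
            M[labels[u], labels[v]] += 1
            M[labels[v], labels[u]] += 1
    M /= M.sum(axis=1, keepdims=True)

    # Adjacency lists for neighborhood lookups.
    nbrs = [[] for _ in range(n)]
    for u, v in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)

    # Posterior per node: prior times likelihoods of labeled neighbors,
    # accumulated in log-space for numerical stability, then normalized.
    soft = np.zeros((n, num_classes))
    for i in range(n):
        logp = np.log(prior).copy()
        for j in nbrs[i]:
            if labels[j] >= 0:
                logp += np.log(M[:, labels[j]])
        post = np.exp(logp - logp.max())
        post /= post.sum()
        if labels[i] >= 0:
            onehot = np.eye(num_classes)[labels[i]]
            soft[i] = (1 - alpha) * onehot + alpha * post
        else:
            soft[i] = post  # pseudo-label distribution for unlabeled nodes
    return soft
```

The resulting soft-label matrix would replace the one-hot targets in a standard cross-entropy objective; because `M` is estimated from the graph itself, the same routine smooths toward same-class neighbors on homophilic graphs and toward cross-class patterns on heterophilic ones.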