🤖 AI Summary
This work addresses the challenges of semi-supervised learning on real-world graphs, where structural unreliability and heterophily hinder performance. To this end, the authors propose a principled Bayesian framework that explicitly captures structural uncertainty by modeling the posterior distribution over signed adjacency matrices. The approach introduces a sparse signed message-passing mechanism to enable robust neighborhood aggregation. Notably, it is the first method to integrate Bayesian posterior marginalization with sparse signed message aggregation. Extensive experiments demonstrate that the proposed model significantly outperforms strong baselines on both synthetic and real-world heterophilic graph benchmarks containing structural noise, establishing a new way for graph neural networks to handle structural uncertainty.
📝 Abstract
Semi-supervised learning on real-world graphs is frequently challenged by structural noise and heterophily: the observed graph may be unreliable, label-disassortative, or both. Many existing graph neural networks either rely on a fixed adjacency structure or attempt to absorb structural noise through regularization. In this work, we explicitly capture structural uncertainty by modeling a posterior distribution over signed adjacency matrices, allowing each edge to be positive, negative, or absent. We propose a sparse signed message-passing network, interpretable from a Bayesian perspective, that is naturally robust to edge noise and heterophily. By combining (i) posterior marginalization over signed graph structures with (ii) sparse signed message aggregation, our approach offers a principled way to handle both edge noise and heterophily. Experimental results demonstrate that our method outperforms strong baselines on heterophilic benchmarks under both synthetic and real-world structural noise.
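To make the two ingredients concrete, here is a loose sketch (not the paper's actual model) of how a posterior over signed edge states and sparse signed aggregation could be combined via Monte Carlo marginalization. All function names, the per-edge logit parameterization, and the residual update are illustrative assumptions, not details taken from the work.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_signs(logits, rng):
    """Sample each edge's state from a categorical posterior over {-1, 0, +1}.

    logits: (num_edges, 3) unnormalized log-probabilities per edge
            (hypothetical parameterization of the structural posterior).
    """
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)          # row-wise softmax
    states = np.array([-1, 0, +1])
    return np.array([rng.choice(states, p=pe) for pe in p])

def signed_aggregate(H, edges, signs):
    """One round of sparse signed message passing with a residual update."""
    M = np.zeros_like(H)
    for (i, j), s in zip(edges, signs):
        M[i] += s * H[j]   # s = -1 repels, 0 drops the edge, +1 attracts
    return H + M           # residual term preserves each node's own features

# Monte Carlo marginalization: average node representations over
# posterior samples of the signed adjacency structure.
H = rng.normal(size=(4, 3))                    # toy node features
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]       # toy directed edge list
logits = rng.normal(size=(len(edges), 3))      # assumed per-edge posterior logits
samples = [signed_aggregate(H, edges, sample_signs(logits, rng))
           for _ in range(50)]
H_marginal = np.mean(samples, axis=0)          # (4, 3) marginalized features
```

Because the edge-state distribution assigns mass to the "absent" state (sign 0), sampled structures are sparse on average, and negative samples let the model push apart features of disassortative neighbors rather than blending them.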