🤖 AI Summary
In real-world scenarios, graph neural networks (GNNs) suffer significant performance degradation due to realistic noise in node features; existing robust methods rely on the unrealistic strong assumption that feature noise is independent of both graph structure and labels. To address this, we introduce the **Dependence-Aware Graph Noise (DANG)** setting, the first to explicitly model the cascading dependency: node-feature noise → graph-structure perturbation → label bias. We propose **DA-GNN**, a novel GNN framework that employs variational inference to explicitly learn the underlying causal generative mechanisms among these three components. Furthermore, we construct the first benchmark dataset for DANG. Extensive experiments demonstrate that DA-GNN consistently outperforms state-of-the-art robust GNNs under both DANG and conventional noise settings, achieving substantial gains in classification accuracy and generalization stability.
📄 Abstract
In real-world applications, node features in graphs often contain noise from various sources, leading to significant performance degradation in GNNs. Although several methods have been developed to enhance robustness, they rely on the unrealistic assumption that noise in node features is independent of the graph structure and node labels, thereby limiting their applicability. To address this limitation, we introduce a more realistic noise scenario, dependency-aware noise on graphs (DANG), where noise in node features creates a chain of noise dependencies that propagates to the graph structure and node labels. We propose a novel robust GNN, DA-GNN, which captures the causal relationships among variables in the data generating process (DGP) of DANG using variational inference. In addition, we present new benchmark datasets that simulate DANG in real-world applications, enabling more practical research on robust GNNs. Extensive experiments demonstrate that DA-GNN consistently outperforms existing baselines across various noise scenarios, including both DANG and conventional noise models commonly considered in this field.
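The cascading dependency described above (feature noise → structure noise → label noise) can be illustrated on a toy graph. This is a minimal sketch of one plausible such data generating process, not the paper's actual DGP: the noise rates, the distance-based edge rule, and the feature-dependent label flips below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: n nodes, 2 classes, class-informative 2-d features.
n = 100
labels = rng.integers(0, 2, size=n)
features = labels[:, None] + 0.3 * rng.standard_normal((n, 2))

# Step 1: feature noise -- corrupt a random 30% of nodes (assumed rate).
corrupted = rng.random(n) < 0.3
features_noisy = features.copy()
features_noisy[corrupted] += 2.0 * rng.standard_normal((corrupted.sum(), 2))

# Step 2: structure noise *depends on* the noisy features -- edges form
# between nodes whose observed features are close, so corrupted features
# induce spurious cross-class edges (a hypothetical similarity rule).
dist = np.linalg.norm(features_noisy[:, None] - features_noisy[None, :], axis=-1)
adj = (dist < np.quantile(dist, 0.05)) & ~np.eye(n, dtype=bool)

# Step 3: label noise also depends on the noisy features -- an annotator
# relying on observed features mislabels some corrupted nodes.
observed_labels = labels.copy()
flip = corrupted & (rng.random(n) < 0.5)
observed_labels[flip] = 1 - observed_labels[flip]

# The cascade shows up as cross-class ("heterophilous") edges:
src, dst = np.nonzero(adj)
cross_class_rate = (labels[src] != labels[dst]).mean()
```

Under the independence assumption criticized in the abstract, Steps 2 and 3 would instead draw edge and label noise without looking at `features_noisy`; DANG makes that dependence explicit.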