🤖 AI Summary
To address the degradation of graph-label relationships and of model generalization under graph distribution shifts, this paper proposes the Unified Graph Modifier (UGM), the first framework to jointly model distribution consistency and label consistency. UGM generates both augmented graphs and invariant subgraphs end to end via differentiable graph modification, invariant subgraph learning, consistency regularization, and supervision-preserving mechanisms, thereby enforcing dual consistency while preserving the structural and semantic integrity of the input graph. Unlike conventional two-stage paradigms, UGM avoids environmental distortion and predictive disconnection. Extensive experiments on multiple real-world graph benchmarks show that UGM significantly outperforms state-of-the-art methods, achieving average accuracy gains of 3.2–5.8 percentage points. These results validate the effectiveness and generalization advantage of dual-consistency modeling for robust graph representation learning under distribution shift.
📝 Abstract
To deal with distribution shifts in graph data, various graph out-of-distribution (OOD) generalization techniques have recently been proposed. These methods often employ a two-step strategy that first creates augmented environments and subsequently identifies invariant subgraphs to improve generalizability. Nevertheless, this approach can be suboptimal from the perspective of consistency. First, augmenting environments by altering graphs while preserving labels may produce graphs that are unrealistic or not meaningfully related to the original distribution, thus lacking distribution consistency. Second, the extracted subgraphs are obtained by directly modifying the graphs and may not maintain a consistent predictive relationship with their labels, thereby harming label consistency. In response to these challenges, we introduce an approach that enhances both types of consistency for graph OOD generalization. We propose a modifier that obtains augmented and invariant graphs in a unified manner. With the augmented graphs, we enrich the training data without compromising the integrity of label-graph relationships. The label consistency enhancement in our framework further preserves the supervision information in the invariant graph. We conduct extensive experiments on real-world datasets to demonstrate the superiority of our framework over other state-of-the-art baselines.
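To make the unified-modifier idea concrete, here is a minimal conceptual sketch (hypothetical code, not the authors' implementation): a single learned edge-score matrix is thresholded to yield an invariant subgraph (high-scoring, assumed label-relevant edges) and, from the same scores, an augmented graph that keeps the invariant core while resampling the remaining "environment" edges. All names and the 0.5 resampling rate are illustrative assumptions.

```python
import numpy as np

def unified_modifier(adj, scores, keep_ratio=0.5, rng=None):
    """Sketch of a unified graph modifier (hypothetical, for illustration).

    adj    : (n, n) binary adjacency matrix.
    scores : (n, n) learned edge-relevance logits (symmetric).
    Returns (invariant_subgraph, augmented_graph), both (n, n) matrices.
    """
    rng = rng or np.random.default_rng(0)
    probs = 1.0 / (1.0 + np.exp(-scores))  # edge relevance in (0, 1)
    probs = probs * adj                    # only score existing edges
    flat = probs[adj > 0]
    if flat.size == 0:
        return adj.copy(), adj.copy()
    # Invariant subgraph: keep the top fraction of edges by relevance.
    thresh = np.quantile(flat, 1.0 - keep_ratio)
    invariant = (probs >= thresh) & (adj > 0)
    # Augmented graph: invariant core plus randomly resampled variant edges,
    # so the label-relevant structure is preserved while the environment varies.
    variant = (adj > 0) & ~invariant
    resampled = variant & (rng.random(adj.shape) < 0.5)
    augmented = invariant | resampled
    return invariant.astype(adj.dtype), augmented.astype(adj.dtype)
```

In a full method the scores would come from a trainable network and the thresholding would be relaxed (e.g., a Gumbel-softmax) to stay differentiable; this sketch only illustrates how one modifier can emit both outputs consistently.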