🤖 AI Summary
Graph Neural Networks (GNNs) suffer from the oversquashing problem when modeling long-range dependencies: structural bottlenecks in graphs impede the effective propagation of information from distant nodes. Existing approaches, such as graph rewiring or channel expansion, alleviate bottlenecks but respectively compromise the inductive bias or increase parameter complexity. This paper proposes an asynchronous message passing framework: leveraging node centrality, it introduces a hierarchical batching mechanism that enables non-synchronous, ordered message updates, thereby circumventing the feature compression inherent in synchronous aggregation and significantly enhancing sensitivity to long-range information. Crucially, the method requires no graph topology modification or channel dimension expansion, preserving model-agnosticism and plug-and-play compatibility. Evaluated on six general-purpose graph datasets and two long-range benchmarks, it achieves improvements of 5% on REDDIT-BINARY and 4% on Peptides-struct, effectively balancing expressive power and inductive bias.
📝 Abstract
Graph Neural Networks (GNNs) suffer from oversquashing, which occurs when tasks require long-range interactions. The problem arises from bottlenecks that limit the propagation of messages among distant nodes. Recently proposed graph rewiring methods modify edge connectivity and are expected to perform well on long-range tasks. Yet, graph rewiring compromises the inductive bias, incurring significant information loss in solving the downstream task. Furthermore, increasing channel capacity may overcome information bottlenecks, but it enhances the parameter complexity of the model. To alleviate these shortcomings, we propose an efficient model-agnostic framework that asynchronously updates node features, unlike traditional synchronous message passing GNNs. Our framework creates node batches in every layer based on node centrality values, and only the features of the nodes belonging to these batches are updated. Asynchronous message updates process information sequentially across layers, avoiding simultaneous compression into fixed-capacity channels. We also theoretically establish that our proposed framework maintains higher feature sensitivity bounds than standard synchronous approaches. Our framework is applied to six standard graph datasets and two long-range datasets to perform graph classification, and it achieves impressive performance with $5\%$ and $4\%$ improvements on REDDIT-BINARY and Peptides-struct, respectively.
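The core idea — batching nodes by centrality and updating only one batch per layer, rather than updating all nodes synchronously — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes degree centrality as the ordering criterion, mean-neighbour aggregation, and a simple 50/50 mixing update, all of which are stand-ins for whatever the authors actually use.

```python
import numpy as np

def degree_centrality(adj):
    """Degree centrality as a simple proxy for node importance
    (the paper's centrality measure may differ)."""
    return adj.sum(axis=1)

def async_message_passing(adj, feats, num_layers=3):
    """Asynchronous message passing sketch: at each layer, only one
    batch of nodes (chosen by descending centrality) aggregates its
    neighbours' features; all other nodes keep their current features."""
    order = np.argsort(-degree_centrality(adj))   # most central nodes first
    batches = np.array_split(order, num_layers)   # one batch per layer
    h = feats.astype(float).copy()
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1)  # avoid div-by-zero
    for batch in batches:
        agg = adj @ h / deg                       # mean over neighbours
        h[batch] = 0.5 * h[batch] + 0.5 * agg[batch]  # update only this batch
    return h
```

Because each batch reads the features already updated by earlier batches, information flows sequentially through the layers instead of being compressed into every node's fixed-width channel at once — the property the abstract attributes to asynchronous updates.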