🤖 AI Summary
Message-Passing Neural Networks (MPNNs) suffer from limited effective receptive fields (ERFs) due to the locality of message passing, which hinders long-range dependency modeling and causes "over-squashing." This work provides a systematic theoretical analysis of this limitation. The authors propose IM-MPNN, a novel architecture that constructs hierarchical, multi-scale graph structures via graph coarsening and employs an interleaved multi-scale message-passing mechanism, extending the ERF without significantly increasing depth or parameter count. Crucially, a theory-driven ERF analytical framework guides efficient long-range interaction modeling. On long-range graph benchmarks, including the Long-Range Graph Benchmark (LRGB), IM-MPNN achieves an average accuracy gain of 8.2% over baseline MPNNs while maintaining faster inference than deeper MPNN variants. These results demonstrate IM-MPNN's combined advantages in modeling capacity, computational efficiency, and scalability.
📝 Abstract
Message-Passing Neural Networks (MPNNs) have become a cornerstone for processing and analyzing graph-structured data. However, their effectiveness is often hindered by phenomena such as over-squashing, where long-range dependencies or interactions are inadequately captured and expressed in the MPNN output. This limitation mirrors the challenges of the Effective Receptive Field (ERF) in Convolutional Neural Networks (CNNs), where the theoretical receptive field is underutilized in practice. In this work, we show and theoretically explain the limited ERF problem in MPNNs. Furthermore, inspired by recent advances in ERF augmentation for CNNs, we propose an Interleaved Multiscale Message-Passing Neural Network (IM-MPNN) architecture to address these problems in MPNNs. Our method incorporates a hierarchical coarsening of the graph, enabling message-passing across multiscale representations and facilitating long-range interactions without excessive depth or parameterization. Through extensive evaluations on benchmarks such as the Long-Range Graph Benchmark (LRGB), we demonstrate substantial improvements over baseline MPNNs in capturing long-range dependencies while maintaining computational efficiency.
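To make the core idea concrete, here is a minimal, hedged sketch of interleaved multiscale message passing in plain Python. This is not the authors' IM-MPNN implementation: the coarsening (a fixed cluster assignment), the aggregation (mean pooling), and the fine/coarse mixing rule are all simplified stand-ins chosen for illustration. The point it demonstrates is the ERF effect: one coarse-scale hop lets information traverse the graph far faster than fine-scale message passing alone.

```python
# Sketch of interleaved multiscale message passing (illustrative only;
# the real IM-MPNN uses learned layers, not these fixed mean aggregations).

def message_pass(features, adj):
    """One round of mean-aggregation message passing on an adjacency list."""
    out = []
    for i, neigh in enumerate(adj):
        msgs = [features[j] for j in neigh] + [features[i]]
        out.append(sum(msgs) / len(msgs))
    return out

def coarsen(features, adj, assign):
    """Pool node features into clusters; assign[i] is node i's cluster id."""
    k = max(assign) + 1
    pooled, counts = [0.0] * k, [0] * k
    for i, f in enumerate(features):
        pooled[assign[i]] += f
        counts[assign[i]] += 1
    pooled = [p / c for p, c in zip(pooled, counts)]
    # Coarse edge between two clusters iff any of their member nodes are linked.
    cadj = [set() for _ in range(k)]
    for i, neigh in enumerate(adj):
        for j in neigh:
            if assign[i] != assign[j]:
                cadj[assign[i]].add(assign[j])
    return pooled, [sorted(s) for s in cadj]

def interleaved_step(features, adj, assign):
    """Fine-scale message passing interleaved with one coarse-scale hop;
    coarse information is broadcast back to the fine nodes and mixed in."""
    fine = message_pass(features, adj)
    pooled, cadj = coarsen(features, adj, assign)
    coarse = message_pass(pooled, cadj)
    return [(fine[i] + coarse[assign[i]]) / 2 for i in range(len(features))]

# Path graph 0-1-2-3, clusters {0,1} and {2,3}. Plain message passing needs
# three rounds for node 0's signal to reach node 3; the coarse hop does it in one.
adj = [[1], [0, 2], [1, 3], [2]]
feats = [1.0, 0.0, 0.0, 0.0]
out = interleaved_step(feats, adj, assign=[0, 0, 1, 1])
```

In this toy run, node 3 receives a nonzero contribution from node 0 after a single interleaved step, whereas a single fine-scale round leaves it at zero; this is the receptive-field extension the abstract describes, obtained without adding depth.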