🤖 AI Summary
Existing theoretical frameworks for Graph Neural Networks (GNNs) suffer from two key limitations: (i) an excessive focus on expressivity beyond the Weisfeiler–Leman (WL) test—despite most real-world tasks not requiring such discriminative power; and (ii) binary, idealized modeling assumptions that fail to capture practical performance bottlenecks such as over-squashing. Method: We propose Message Passing Complexity (MPC), the first continuous, computable metric quantifying task difficulty for GNNs, grounded in the intrinsic message-passing mechanism and empirically validated across diverse benchmarks. Contribution/Results: MPC transcends traditional binary expressivity characterizations and strong distributional assumptions, enabling fine-grained, architecture-agnostic prediction of relative model performance on standard graph learning tasks. Extensive experiments demonstrate a strong correlation between MPC values and empirical model accuracy, establishing a principled bridge between theoretical analysis and practical GNN design.
📝 Abstract
Expressivity theory, characterizing which graphs a GNN can distinguish, has become the predominant framework for analyzing GNNs, with new models striving for higher expressivity. However, we argue that this focus is misguided: First, higher expressivity is not necessary for most real-world tasks, as these tasks rarely require expressivity beyond the basic WL test. Second, expressivity theory's binary characterization and idealized assumptions fail to reflect GNNs' practical capabilities. To overcome these limitations, we propose Message Passing Complexity (MPC): a continuous measure that quantifies the difficulty for a GNN architecture to solve a given task through message passing. MPC captures practical limitations like over-squashing while preserving the theoretical impossibility results from expressivity theory, effectively narrowing the gap between theory and practice. Through extensive validation on fundamental GNN tasks, we show that MPC's theoretical predictions correlate with empirical performance, successfully explaining architectural successes and failures. MPC thereby advances beyond expressivity theory, providing a more powerful and nuanced framework for understanding and improving GNN architectures.
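To make the over-squashing phenomenon mentioned above concrete, here is a minimal toy sketch (not the paper's MPC metric, and not any specific GNN architecture): on a path graph with mean aggregation, the influence of a distant node's input on a target node's embedding decays sharply with distance, even though a purely binary expressivity analysis would classify the information as reachable. The graph size, layer count, and use of self-loops are illustrative assumptions.

```python
import numpy as np

def path_adjacency(n):
    """Adjacency matrix of an undirected path graph on n nodes."""
    A = np.zeros((n, n))
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = 1.0
    return A

def influence_on_node0(n_nodes, n_layers):
    """Influence of each input node on node 0's representation after
    n_layers rounds of mean aggregation with self-loops (a linear
    message-passing surrogate: powers of the row-normalized matrix)."""
    A_hat = path_adjacency(n_nodes) + np.eye(n_nodes)
    P = A_hat / A_hat.sum(axis=1, keepdims=True)  # row-stochastic
    M = np.linalg.matrix_power(P, n_layers)
    return M[0]  # row 0: how much each node contributes to node 0

inf = influence_on_node0(n_nodes=8, n_layers=7)
print(inf)  # entries shrink rapidly with distance from node 0
```

Even with enough layers for node 7's signal to reach node 0, its contribution is exponentially diluted by the aggregation, which is the kind of practical bottleneck a binary "distinguishable / not distinguishable" analysis cannot express and a continuous measure like MPC is designed to capture.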