🤖 AI Summary
This paper addresses the fundamental trade-off between expressive power and computational efficiency in graph neural networks (GNNs). To this end, it proposes EB-1WL, an edge-coloring refinement test, together with a corresponding architecture, EB-GNN. Inspired by the Chiba–Nishizeki triangle-counting algorithm, EB-GNN explicitly models triangle substructures within edge-based message passing. The paper unifies a first-order logic characterization and a homomorphism-counting analysis under an edge-coloring paradigm, and proves that EB-1WL is strictly more expressive than the 1-WL test. Theoretical analysis establishes near-linear time and memory complexity for both EB-1WL and EB-GNN on practical graph learning tasks. Empirically, EB-GNN substantially outperforms standard message-passing neural networks (MPNNs) across diverse graph learning tasks, matches or exceeds the performance of specialized GNNs, and maintains superior computational efficiency and scalability.
📝 Abstract
We propose EB-1WL, an edge-based color-refinement test, and a corresponding GNN architecture, EB-GNN. Our architecture is inspired by a classic triangle counting algorithm by Chiba and Nishizeki, and explicitly uses triangles during message passing. We achieve the following results: (1) EB-1WL is significantly more expressive than 1-WL. Further, we provide a complete logical characterization of EB-1WL based on first-order logic, and matching distinguishability results based on homomorphism counting. (2) In an important distinction from previous proposals for more expressive GNN architectures, EB-1WL and EB-GNN require near-linear time and memory on practical graph learning tasks. (3) Empirically, we show that EB-GNN is a highly efficient general-purpose architecture: It substantially outperforms simple MPNNs, and remains competitive with task-specialized GNNs while being significantly more computationally efficient.
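As background for the triangle-based message passing described above, the Chiba–Nishizeki strategy the paper builds on can be sketched as follows. This is a minimal Python illustration of the classic triangle-listing idea (process vertices in non-increasing degree order and check pairs of neighbors for adjacency), not the paper's EB-GNN implementation; the function name and edge-list format are illustrative choices:

```python
from collections import defaultdict

def list_triangles(edges):
    """List each triangle of an undirected graph exactly once.

    Follows the Chiba-Nishizeki idea: order vertices by degree
    (highest first) and, for each vertex u, mark its neighbors and
    report a triangle whenever two later-ranked neighbors v, w are
    themselves adjacent. Runs in O(m * arboricity) time overall.
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    # Degree ordering, highest degree first; ranks break ties.
    order = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
    rank = {v: i for i, v in enumerate(order)}

    triangles = []
    for u in order:
        marked = adj[u]  # neighbors of u, used as the "mark" set
        for v in adj[u]:
            if rank[v] <= rank[u]:
                continue  # report each triangle from its lowest-rank corner only
            for w in adj[v]:
                if rank[w] > rank[v] and w in marked:
                    triangles.append((u, v, w))
    return triangles
```

For example, a complete graph on four vertices yields four triangles, and near-linear behavior on sparse graphs is what makes explicit triangle information affordable inside message passing.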