Breaking Symmetry Bottlenecks in GNN Readouts

📅 2026-02-05
📈 Citations: 0 · Influential: 0
🤖 AI Summary
This work addresses a fundamental limitation in the expressive power of graph neural networks (GNNs) that arises not only from message passing but also from the use of linear permutation-invariant readout operations—such as summation or averaging—which discard symmetry-aware information. Drawing on finite-dimensional representation theory, the authors show that such readouts project node embeddings onto the invariant subspace of the permutation group, collapsing nontrivial symmetric structures. To overcome this, they propose a novel readout architecture based on projection decomposition and nonlinear invariant statistics, which preserves symmetry-channel information lost by conventional methods. Replacing only the readout module enables a fixed encoder to distinguish Weisfeiler–Lehman indistinguishable graph pairs and yields significant performance gains across multiple graph learning benchmarks, underscoring the critical role of readout design in GNN expressivity.
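
To spell out the collapse the summary describes, here is the group-averaging argument written out for sum/mean pooling; the notation ($X$ for the node-embedding matrix, $P_\sigma$ for permutation matrices) is ours, not necessarily the paper's. For node embeddings $X \in \mathbb{R}^{n \times d}$ on which $S_n$ acts by row permutations $\sigma \cdot X = P_\sigma X$, the Reynolds operator averages over the group,

$$\mathcal{R}(X) = \frac{1}{n!} \sum_{\sigma \in S_n} P_\sigma X = \mathbf{1}\,\bar{x}^{\top}, \qquad \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i,$$

so every row is replaced by the column-wise mean. Any linear invariant readout $\phi$ satisfies $\phi(X) = \frac{1}{n!}\sum_{\sigma \in S_n} \phi(P_\sigma X) = \phi(\mathcal{R}(X))$, hence it sees $X$ only through $\bar{x}$; the centered component $X - \mathbf{1}\bar{x}^{\top}$ is exactly the information that is erased.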

📝 Abstract
Graph neural networks (GNNs) are widely used for learning on structured data, yet their ability to distinguish non-isomorphic graphs is fundamentally limited. These limitations are usually attributed to message passing; in this work we show that an independent bottleneck arises at the readout stage. Using finite-dimensional representation theory, we prove that all linear permutation-invariant readouts, including sum and mean pooling, factor through the Reynolds (group-averaging) operator and therefore project node embeddings onto the fixed subspace of the permutation action, erasing all non-trivial symmetry-aware components regardless of encoder expressivity. This yields both a new expressivity barrier and an interpretable characterization of what global pooling preserves or destroys. To overcome this collapse, we introduce projector-based invariant readouts that decompose node representations into symmetry-aware channels and summarize them with nonlinear invariant statistics, preserving permutation invariance while retaining information provably invisible to averaging. Empirically, swapping only the readout enables fixed encoders to separate WL-hard graph pairs and improves performance across multiple benchmarks, demonstrating that readout design is a decisive and under-appreciated factor in GNN expressivity.
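
As a concrete illustration of the proposed fix, here is a minimal sketch of a projector-based invariant readout in the spirit of the abstract: project onto the fixed (mean) channel, then summarize the complementary centered channel with nonlinear invariant statistics, here power-sum moments. This is our reconstruction under those assumptions, not the authors' code; the function name and the choice of moments are hypothetical.

import torch

def projector_readout(X: torch.Tensor, max_moment: int = 3) -> torch.Tensor:
    """Permutation-invariant readout that keeps information mean pooling discards.

    X: (n, d) node-embedding matrix for one graph.
    Returns a graph-level vector built from:
      * the trivial channel: the node mean (what sum/mean pooling already sees),
      * the complementary channel: power-sum moments of the centered embeddings,
        which are permutation invariant yet invisible to plain averaging.
    """
    mean = X.mean(dim=0)        # projection onto the fixed subspace
    centered = X - mean         # component erased by any linear invariant readout
    # Power sums are symmetric in the node index, hence permutation invariant.
    stats = [centered.pow(k).mean(dim=0) for k in range(2, max_moment + 1)]
    return torch.cat([mean, *stats])

# Toy check: two node sets with the same mean but different spread are
# identical to mean pooling but distinguishable to the projector readout.
A = torch.tensor([[1.0], [-1.0]])
B = torch.tensor([[2.0], [-2.0]])
assert torch.allclose(A.mean(dim=0), B.mean(dim=0))           # mean pooling conflates them
assert not torch.allclose(projector_readout(A), projector_readout(B))

Because the centered moments are symmetric functions of the node index, the readout stays permutation invariant while separating graphs that mean pooling conflates, as the toy check shows.
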
Problem

Research questions and friction points this paper is trying to address: graph neural networks, readout, permutation invariance, symmetry, expressivity.
Innovation

Methods, ideas, or system contributions that make the work stand out: graph neural networks, readout, symmetry, permutation invariance, representation theory.
Authors

Mouad Talhi, Department of Computing, Imperial College London, UK
Arne Wolf, Department of Mathematics, Imperial College London, UK; London School of Geometry and Number Theory, UK
Anthea Monod, Associate Professor, Department of Mathematics, Imperial College London (Applied Algebraic Geometry, Topological Data Analysis, Mathematical Biology)