A Logical View of GNN-Style Computation and the Role of Activation Functions

📅 2025-12-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the expressive power of graph neural networks (GNNs) on numerical and Boolean queries, focusing on the interplay between linear message passing and nonlinear activation functions. We introduce MPLang, a logical language that uniformly models linear aggregation alongside diverse activations, including ReLU, truncated ReLU, and other eventually constant activations. Our key contributions are threefold: (1) we establish the first rigorous separation result showing that, when linear layers are incorporated, ReLU and bounded eventually constant activations exhibit strictly different expressive capabilities; (2) we characterize the expressive power of the activation-free fragment in terms of walk-summed features; and (3) our framework unifies major logical formalisms for GNNs, revealing that the choice of nonlinear activation fundamentally affects expressivity for numerical queries. These results provide a new theoretical benchmark for understanding GNN expressiveness, bridging logical, algebraic, and architectural perspectives.

📝 Abstract
We study the numerical and Boolean expressiveness of MPLang, a declarative language that captures the computation of graph neural networks (GNNs) through linear message passing and activation functions. We begin with A-MPLang, the fragment without activation functions, and give a characterization of its expressive power in terms of walk-summed features. For bounded activation functions, we show that (under mild conditions) all eventually constant activations yield the same expressive power, numerical and Boolean, and that it subsumes previously established logics for GNNs with eventually constant activation functions but without linear layers. Finally, we prove the first expressive separation between unbounded and bounded activations in the presence of linear layers: MPLang with ReLU is strictly more powerful for numerical queries than MPLang with eventually constant activation functions, e.g., truncated ReLU. This hinges on subtle interactions between linear aggregation and eventually constant non-linearities, and it establishes that GNNs using ReLU are more expressive than those restricted to eventually constant activations and linear layers.
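The ingredients in the abstract can be sketched concretely. Below is a minimal, illustrative NumPy model of the computations discussed, assuming a layer of the form act(A X W): iterating the activation-free layer (the A-MPLang setting) produces walk counts (walk-summed features), while ReLU is unbounded and truncated ReLU is eventually constant. Names such as `mp_layer` are hypothetical and not taken from the paper.

```python
import numpy as np

def relu(x):
    # unbounded activation: identity on positives
    return np.maximum(x, 0.0)

def truncated_relu(x):
    # eventually constant activation: clips to [0, 1]
    return np.clip(x, 0.0, 1.0)

def mp_layer(A, X, W, act=None):
    # one layer of linear message passing: aggregate neighbor
    # features via adjacency A, transform by W, then optionally
    # apply a nonlinearity
    H = A @ X @ W
    return act(H) if act is not None else H

# toy graph: a path on 3 nodes (0 - 1 - 2)
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
X = np.ones((3, 1))   # constant initial feature
W = np.ones((1, 1))   # trivial linear transform

# without any activation, k layers sum over walks of length k:
H = X
for _ in range(2):
    H = mp_layer(A, H, W)
# H[i] counts walks of length 2 ending at node i

# the two activations agree on [0, 1] but diverge beyond it,
# which is where the expressiveness gap for numerical queries lives
```
The separation result in the abstract says this divergence above the clipping threshold is not cosmetic: with linear layers available, no combination of eventually constant activations can simulate ReLU on numerical queries.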
Problem

Research questions and friction points this paper is trying to address.

What is the expressive power of GNNs whose layers use only linear message passing, without activation functions?
Do different bounded (eventually constant) activation functions yield different GNN expressiveness?
Does ReLU make GNNs strictly more expressive than bounded activations once linear layers are present?
Innovation

Methods, ideas, or system contributions that make the work stand out.

A declarative language, MPLang, that captures GNN computation via linear message passing and activations.
A characterization of the activation-free fragment's expressive power in terms of walk-summed features.
The first proof that ReLU is strictly more expressive than eventually constant activations in the presence of linear layers.