🤖 AI Summary
To address redundant depth, over-smoothing, and over-squashing in GNN inference, this paper proposes a robust and scalable dynamic early-exit paradigm. Methodologically: (1) we introduce the Symmetric-Anti-Symmetric GNN (SAS-GNN), which explicitly decouples smoothness-preserving and information-preserving feature propagation via symmetric and antisymmetric graph filters (a minimal sketch of this decomposition follows); (2) we design a dual-granularity, confidence-driven early-exit mechanism, operating at both the node and the graph level, to enable fine-grained, adaptive termination of message passing; (3) the framework natively handles heterophilic graphs and long-range dependencies. Evaluated on heterophilic and long-range benchmarks, our approach maintains full-depth accuracy while adaptively reducing effective depth by 30–65%, yielding substantial savings in latency and FLOPs. Crucially, performance remains stable as depth increases, and accuracy is competitive with attention-based and asynchronous message-passing models at a fraction of the computation.
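The summary above describes decoupling propagation into a symmetric (smoothing) filter and an antisymmetric (information-preserving) filter. Below is a minimal PyTorch sketch of one way such a layer could look, assuming a dense normalized adjacency `A_hat`; the names `SASLayer`, `Ws`, `Wa`, and `step` are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SASLayer(nn.Module):
    """One propagation step combining a symmetric (smoothing) filter
    with an antisymmetric (information-preserving) filter."""
    def __init__(self, dim: int, step: float = 0.1):
        super().__init__()
        self.Ws = nn.Parameter(torch.empty(dim, dim))  # symmetrized in forward
        self.Wa = nn.Parameter(torch.empty(dim, dim))  # antisymmetrized in forward
        nn.init.xavier_uniform_(self.Ws)
        nn.init.xavier_uniform_(self.Wa)
        self.step = step  # Euler step size under a graph-ODE view

    def forward(self, x: torch.Tensor, A_hat: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, dim); A_hat: (num_nodes, num_nodes) normalized adjacency
        W_sym = 0.5 * (self.Ws + self.Ws.T)   # symmetric part: controls smoothing
        W_anti = 0.5 * (self.Wa - self.Wa.T)  # antisymmetric part: non-dissipative
        update = A_hat @ x @ W_sym + A_hat @ x @ W_anti
        return x + self.step * torch.tanh(update)  # residual Euler update
```

The antisymmetric constraint is the standard device from non-dissipative GNNs: it keeps the eigenvalues of the update's Jacobian close to the imaginary axis, so representations neither explode nor collapse as layers stack, which is what makes stable intermediate representations, and hence early exits, possible.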
📝 Abstract
Early-exit mechanisms allow deep neural networks to halt inference as soon as classification confidence is high enough, adaptively trading depth for confidence and thereby cutting latency and energy on easy inputs while retaining full-depth accuracy for harder ones. Similarly, adding early-exit mechanisms to Graph Neural Networks (GNNs), the go-to models for graph-structured data, allows depth to be traded dynamically for confidence on simple graphs while retaining full depth on harder, more complex graphs whose intricate relationships demand it. Although early exits have proven effective across various deep learning domains, their potential in GNNs, particularly in scenarios that require deep architectures resistant to over-smoothing and over-squashing, remains largely unexplored. We unlock that potential by first introducing Symmetric-Anti-Symmetric Graph Neural Networks (SAS-GNN), whose symmetry-based inductive biases mitigate these issues and yield stable intermediate representations, making them well suited to early exiting. Building on this backbone, we present Early-Exit Graph Neural Networks (EEGNNs), which append confidence-aware exit heads that can terminate propagation on the fly, per node or for the entire graph. Experiments show that EEGNNs preserve robust performance as depth grows and deliver competitive accuracy on heterophilic and long-range benchmarks, matching attention-based and asynchronous message-passing models while substantially reducing computation and latency. We plan to release the code to reproduce our experiments.
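To make the confidence-aware exit heads concrete, here is a minimal inference-time sketch under stated assumptions: `layers` are propagation layers (e.g., the `SASLayer` sketch above) and `exit_heads` are per-layer classifiers, with one extra head before any propagation; the function name, arguments, and threshold `tau` are hypothetical, not the paper's API.

```python
import torch

@torch.no_grad()
def early_exit_inference(x, A_hat, layers, exit_heads, tau=0.9):
    num_nodes = x.size(0)
    active = torch.ones(num_nodes, dtype=torch.bool, device=x.device)
    logits = exit_heads[0](x)  # predictions before any propagation
    for layer, head in zip(layers, exit_heads[1:]):
        # frozen (exited) nodes keep their current state
        x = torch.where(active.unsqueeze(-1), layer(x, A_hat), x)
        new_logits = head(x)
        logits[active] = new_logits[active]  # update only still-active nodes
        confidence = new_logits.softmax(dim=-1).amax(dim=-1)
        active &= confidence < tau  # node-level exit: confident nodes stop
        if not active.any():        # graph-level exit: halt all propagation
            break
    return logits
```

For clarity this sketch still runs each layer over every node and merely freezes exited ones; an implementation aiming at the reported FLOP savings would instead restrict message passing to the active subgraph at each step.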