Early-Exit Graph Neural Networks

📅 2025-05-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address depth redundancy, over-smoothing, and over-compression in GNN inference, this paper proposes the first robust and scalable dynamic early-exit paradigm. Methodologically: (1) we introduce the Symmetric-Antisymmetric GNN (SAS-GNN), which explicitly decouples smoothness-preserving and discriminative feature propagation via symmetric and antisymmetric graph filters; (2) we design a dual-granularity, confidence-driven early-exit mechanism—operating at both node- and graph-level—to enable fine-grained, adaptive termination of message passing; (3) the framework natively supports heterogeneous graphs and long-range dependency modeling. Evaluated on multiple heterogeneous and long-range benchmarks, our approach maintains full-depth accuracy while adaptively reducing effective depth by 30–65%, yielding substantial reductions in latency and FLOPs. Crucially, performance remains stable with increasing depth—outperforming existing early-exit and asynchronous GNN methods by significant margins.

Technology Category

Application Category

📝 Abstract
Early-exit mechanisms allow deep neural networks to halt inference as soon as classification confidence is high enough, adaptively trading depth for confidence, and thereby cutting latency and energy on easy inputs while retaining full-depth accuracy for harder ones. Similarly, adding early exit mechanisms to Graph Neural Networks (GNNs), the go-to models for graph-structured data, allows for dynamic trading depth for confidence on simple graphs while maintaining full-depth accuracy on harder and more complex graphs to capture intricate relationships. Although early exits have proven effective across various deep learning domains, their potential within GNNs in scenarios that require deep architectures while resisting over-smoothing and over-squashing remains largely unexplored. We unlock that potential by first introducing Symmetric-Anti-Symmetric Graph Neural Networks (SAS-GNN), whose symmetry-based inductive biases mitigate these issues and yield stable intermediate representations that can be useful to allow early exiting in GNNs. Building on this backbone, we present Early-Exit Graph Neural Networks (EEGNNs), which append confidence-aware exit heads that allow on-the-fly termination of propagation based on each node or the entire graph. Experiments show that EEGNNs preserve robust performance as depth grows and deliver competitive accuracy on heterophilic and long-range benchmarks, matching attention-based and asynchronous message-passing models while substantially reducing computation and latency. We plan to release the code to reproduce our experiments.
Problem

Research questions and friction points this paper is trying to address.

Adaptive early-exit mechanisms for GNNs to balance depth and confidence
Addressing over-smoothing and over-squashing in deep GNN architectures
Reducing computation and latency while maintaining competitive accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Early-exit mechanisms for GNNs
Symmetric-Anti-Symmetric GNN backbone
Confidence-aware exit heads for dynamic termination
🔎 Similar Papers
No similar papers found.
A
Andrea Giuseppe Di Francesco
Institute of Information Science and Technologies "Alessandro Faedo" - ISTI-CNR, Pisa, Italy; Department of Computer Science, Control and Management Engineering, Sapienza University of Rome, Rome, Italy
Maria Sofia Bucarelli
Maria Sofia Bucarelli
Research Fellow, Sapienza University of Rome
Machine LearningData Science
F
F. M. Nardini
Institute of Information Science and Technologies "Alessandro Faedo" - ISTI-CNR, Pisa, Italy
Raffaele Perego
Raffaele Perego
Research Director, ISTI-CNR
Information retrievalmachine learninghigh performance computing
Nicola Tonellotto
Nicola Tonellotto
Associate Professor, University of Pisa
Information RetrievalCloud ComputingMachine Learning
Fabrizio Silvestri
Fabrizio Silvestri
Sapienza, University of Rome
Machine LearningArtificial IntelligenceNatural Language ProcessingRAGWeb