Learning from Historical Activations in Graph Neural Networks

📅 2026-01-03
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a key limitation of conventional graph neural networks (GNNs): they typically rely solely on final-layer node representations for pooling or classification, discarding valuable activation information from intermediate layers and suffering from issues such as over-smoothing and representational degradation. To overcome this, the authors propose HISTOGRAPH, a two-stage attention-based aggregation framework that first unifies intermediate activations across all GNN layers through inter-layer attention and then models the evolution of node representations across depths via node-level attention. By systematically leveraging these historical activation signals, HISTOGRAPH mitigates information loss in deep GNNs, achieving state-of-the-art performance on multiple graph classification benchmarks and demonstrating enhanced expressiveness and robustness, particularly in deep architectures.

📝 Abstract
Graph Neural Networks (GNNs) have demonstrated remarkable success in various domains such as social networks and molecular chemistry. A crucial component of GNNs is the pooling procedure, in which the node features calculated by the model are combined to form an informative final descriptor to be used for the downstream task. However, previous graph pooling schemes rely on the last GNN layer features as an input to the pooling or classifier layers, potentially under-utilizing important activations of previous layers produced during the forward pass of the model, which we regard as historical graph activations. This gap is particularly pronounced in cases where a node's representation can shift significantly over the course of many graph neural layers, and is worsened by graph-specific challenges such as over-smoothing in deep architectures. To bridge this gap, we introduce HISTOGRAPH, a novel two-stage attention-based final aggregation layer that first applies a unified layer-wise attention over intermediate activations, followed by node-wise attention. By modeling the evolution of node representations across layers, HISTOGRAPH leverages both the activation history of nodes and the graph structure to refine features used for final prediction. Empirical results on multiple graph classification benchmarks demonstrate that HISTOGRAPH offers strong performance that consistently improves upon traditional techniques, with particularly strong robustness in deep GNNs.
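The abstract describes a two-stage aggregation: layer-wise attention over each node's stacked intermediate activations, followed by node-wise attention to pool the refined features into a graph descriptor. A minimal NumPy sketch of that structure, assuming the paper's learned attention is replaced here by simple hypothetical scoring vectors (`w_layer`, `w_node`) and a function name (`histograph_pool`) invented for illustration, might look like:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def histograph_pool(H, w_layer, w_node):
    """Two-stage attention pooling over historical GNN activations (sketch).

    H:       (L, N, d) array — node features stacked from all L GNN layers.
    w_layer: (d,) scoring vector for layer-wise attention (hypothetical).
    w_node:  (d,) scoring vector for node-wise attention (hypothetical).
    Returns a (d,) graph-level descriptor.
    """
    # Stage 1: layer-wise attention — for each node, weight its L
    # historical activations and sum them into one refined feature.
    layer_scores = H @ w_layer                 # (L, N)
    alpha = softmax(layer_scores, axis=0)      # attention over the layer axis
    Z = (alpha[..., None] * H).sum(axis=0)     # (N, d) refined node features

    # Stage 2: node-wise attention — pool refined node features
    # into a single descriptor for the downstream classifier.
    node_scores = Z @ w_node                   # (N,)
    beta = softmax(node_scores, axis=0)        # attention over nodes
    return beta @ Z                            # (d,) graph descriptor

# Example: 4 GNN layers, 6 nodes, feature dimension 8.
rng = np.random.default_rng(0)
L, N, d = 4, 6, 8
H = rng.normal(size=(L, N, d))
g = histograph_pool(H, rng.normal(size=d), rng.normal(size=d))
```

The key design point the abstract suggests is the order of the stages: attending over the layer axis first lets each node mix its own activation history before any cross-node pooling happens, which is what distinguishes this from pooling the last layer alone.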
Problem

Research questions and friction points this paper is trying to address.

graph neural networks
graph pooling
historical activations
over-smoothing
node representations
Innovation

Methods, ideas, or system contributions that make the work stand out.

historical activations
graph neural networks
attention mechanism
graph pooling
over-smoothing