Sketch-Augmented Features Improve Learning Long-Range Dependencies in Graph Neural Networks

📅 2025-11-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Graph Neural Networks (GNNs) face three fundamental challenges: excessive compression of long-range information, over-smoothing of node representations, and limited expressive power. To address these, we propose Sketched Random Features (SRF), a randomized global feature framework whose embeddings are unique, distance-sensitive, and topology-agnostic. SRF employs an efficient random projection to generate compact feature sketches and integrates into standard GNNs without architectural modification. By preserving structural distinctions between distant nodes, SRF mitigates over-compression and over-smoothing, substantially improving the modeling of long-range dependencies; it is also general-purpose and scalable. Extensive experiments on multiple graph learning benchmarks demonstrate consistent and substantial improvements over state-of-the-art baselines. Crucially, SRF is orthogonal to existing techniques such as positional encodings and can be combined with them to yield further performance gains.
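The core primitive described above is a compact random-projection sketch of node features. The following minimal NumPy example (a hedged illustration; the function name, dimensions, and scaling are assumptions, not the paper's actual code) shows how a Johnson-Lindenstrauss-style Gaussian projection produces low-dimensional embeddings:

```python
import numpy as np

def sketched_random_features(x, k, seed=0):
    """Compress node features x of shape (n, d) into k-dimensional
    sketches via a Gaussian random projection (Johnson-Lindenstrauss
    style). Illustrative only; the paper's exact construction and
    hyperparameters may differ."""
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    # Scale by 1/sqrt(k) so pairwise distances are preserved in expectation.
    proj = rng.standard_normal((d, k)) / np.sqrt(k)
    return x @ proj

# Toy data: 50 nodes with 256-dimensional features, sketched to 64 dims.
x = np.random.default_rng(1).standard_normal((50, 256))
s = sketched_random_features(x, k=64)
```

Because a Gaussian projection maps distinct inputs to distinct outputs almost surely and approximately preserves pairwise distances, such sketches are unique and distance-sensitive; and since they depend only on node features, not edges, they are topology-agnostic.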

📝 Abstract
Graph Neural Networks learn on graph-structured data by iteratively aggregating local neighborhood information. While this local message-passing paradigm imparts a powerful inductive bias and exploits graph sparsity, it also yields three key challenges: (i) oversquashing of long-range information, (ii) oversmoothing of node representations, and (iii) limited expressive power. In this work we inject randomized global embeddings of node features, which we term Sketched Random Features, into standard GNNs, enabling them to efficiently capture long-range dependencies. The embeddings are unique, distance-sensitive, and topology-agnostic, properties which we analytically and empirically show alleviate the aforementioned limitations when injected into GNNs. Experimental results on real-world graph learning tasks confirm that this strategy consistently improves performance over baseline GNNs, offering both a standalone solution and a complementary enhancement to existing techniques such as graph positional encodings. Our source code is available at https://github.com/ryienh/sketched-random-features.
Problem

Research questions and friction points this paper is trying to address.

Addresses oversquashing of long-range information in GNNs
Mitigates oversmoothing of node representations in graph networks
Enhances expressive power for capturing long-range dependencies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Injecting sketched random features into GNNs
Using unique distance-sensitive global embeddings
Alleviating oversquashing and oversmoothing in graphs
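The injection step itself can be sketched as simple feature concatenation ahead of message passing. The layer below is a minimal hypothetical mean-aggregation GNN layer in NumPy, not the paper's implementation; all names and shapes are illustrative assumptions:

```python
import numpy as np

def gnn_layer_with_srf(x, adj, srf, w):
    """One mean-aggregation message-passing layer whose input is the
    local node features concatenated with precomputed sketched random
    features (srf). A minimal hypothetical layer, not the paper's code."""
    h = np.concatenate([x, srf], axis=1)    # inject global sketches
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                     # guard isolated nodes
    agg = (adj @ h) / deg                   # mean over neighbors
    return np.maximum(agg @ w, 0.0)         # linear transform + ReLU

# Toy 4-node path graph: 0-1-2-3 (dense adjacency for clarity).
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))             # local node features
srf = rng.standard_normal((4, 4))           # precomputed sketches
w = rng.standard_normal((12, 16))           # (8 + 4) -> 16 hidden dims
out = gnn_layer_with_srf(x, adj, srf, w)    # shape (4, 16)
```

This reflects the claim that SRF requires no architectural modification: the sketches simply widen the input to an otherwise unchanged GNN layer, which is also why the approach composes with positional encodings.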