HyperGraphX: Graph Transductive Learning with Hyperdimensional Computing and Message Passing

📅 2025-10-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the accuracy-efficiency trade-offs of existing graph neural networks (GNNs) and hyperdimensional computing (HDC) methods in graph transduction. We propose the first framework that deeply integrates HDC binding/bundling operations with GNN message passing, supporting both homogeneous and heterogeneous graphs. Our approach employs binary hypervectors for node representation and couples them with graph convolution for efficient, scalable information propagation. Evaluated on multiple benchmark datasets, it achieves superior classification accuracy while drastically reducing computational cost: training/inference is accelerated on average by 9,561× over GCNII and 144.5× over HDGL on GPU hardware. The framework delivers high accuracy, ultra-low computational overhead, and exceptional energy efficiency, naturally aligning with brain-inspired and in-memory computing architectures. It establishes a novel paradigm for lightweight, hardware-aware graph learning.

📝 Abstract
We present a novel algorithm, hdgc, that marries graph convolution with binding and bundling operations from hyperdimensional computing for transductive graph learning. In prediction accuracy, hdgc outperforms major, popular graph neural network implementations as well as state-of-the-art hyperdimensional computing implementations on a collection of homophilic and heterophilic graphs. Compared with the most accurate learning methodologies we have tested, on the same target GPU platform, hdgc is on average 9,561.0 and 144.5 times faster than gcnii (a graph neural network implementation) and HDGL (a hyperdimensional computing implementation), respectively. As the majority of the learning operates on binary vectors, we expect outstanding energy performance of hdgc on neuromorphic and emerging processing-in-memory devices.
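The binding and bundling operations the abstract refers to are the two core HDC primitives. A minimal sketch of what they look like on binary hypervectors, assuming the common XOR-binding and majority-vote-bundling conventions (the paper's exact operator choices may differ):

```python
import numpy as np

D = 10_000  # hypervector dimensionality; large D is typical in HDC
rng = np.random.default_rng(0)

def random_hv(rng, d=D):
    """Random binary hypervector in {0, 1}^d."""
    return rng.integers(0, 2, size=d, dtype=np.uint8)

def bind(a, b):
    """Binding via element-wise XOR: associates two hypervectors;
    the result is dissimilar to both inputs."""
    return np.bitwise_xor(a, b)

def bundle(hvs):
    """Bundling via element-wise majority vote: superimposes
    hypervectors; the result stays similar to each input."""
    votes = np.sum(hvs, axis=0)
    return (2 * votes > len(hvs)).astype(np.uint8)

def hamming_sim(a, b):
    """Normalized similarity: 1.0 for identical, ~0.5 for random pairs."""
    return 1.0 - np.mean(a != b)

a, b, c = (random_hv(rng) for _ in range(3))
bundled = bundle([a, b, c])
```

Because bundling preserves similarity while binding destroys it, a bundle of three random hypervectors remains recognizably close to each input (similarity ≈ 0.75), while `bind(a, b)` is near-random (similarity ≈ 0.5) with respect to `a`.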
Problem

Research questions and friction points this paper is trying to address.

Graph neural networks deliver high transductive accuracy but at substantial computational cost
Hyperdimensional computing methods are efficient but have lagged GNNs in prediction accuracy
No existing lightweight framework handles both homophilic and heterophilic graphs well
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates HDC binding and bundling operations directly into graph-convolution-style message passing
Represents nodes with binary hypervectors, suiting neuromorphic and processing-in-memory hardware
Achieves both high accuracy and large speedups (on average 9,561× over gcnii, 144.5× over HDGL) across homophilic and heterophilic graphs
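To make the combination of bundling and message passing concrete, here is a hypothetical end-to-end sketch on a toy graph. The graph, the random node encoding, and the nearest-prototype classifier are all illustrative assumptions for exposition, not the paper's hdgc implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 10_000       # hypervector dimensionality
n_nodes = 6

# Toy graph: two triangles joined by the edge (2, 3).
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
labels = {0: 0, 3: 1}  # transductive setting: only two labeled nodes

# Illustrative encoding: random binary hypervectors per node
# (a real system would encode node features instead).
H = rng.integers(0, 2, size=(n_nodes, D), dtype=np.uint8)

def bundle(hvs):
    """Majority-vote bundling of binary hypervectors."""
    votes = np.sum(hvs, axis=0)
    return (2 * votes > len(hvs)).astype(np.uint8)

def propagate(H, adj):
    """One message-passing step: each node bundles its own
    hypervector with its neighbors' (graph-convolution-like smoothing)."""
    return np.stack([bundle([H[v]] + [H[u] for u in adj[v]])
                     for v in sorted(adj)])

for _ in range(2):  # two rounds of propagation
    H = propagate(H, adj)

# Class prototypes: the propagated hypervectors of the labeled nodes.
protos = {c: H[v] for v, c in labels.items()}

def predict(hv):
    """Nearest-prototype classification by Hamming distance."""
    return min(protos, key=lambda c: int(np.sum(hv != protos[c])))

preds = [predict(H[v]) for v in range(n_nodes)]
```

Every step is bit-level (XOR-free here, but still integer counting and comparisons on {0, 1} arrays), which is what makes this family of methods attractive for neuromorphic and processing-in-memory hardware.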