🤖 AI Summary
This work addresses the accuracy-efficiency trade-offs of existing graph neural networks (GNNs) and hyperdimensional computing (HDC) methods in transductive graph learning. We propose the first framework that deeply integrates HDC binding/bundling operations with GNN message passing, supporting both homophilic and heterophilic graphs. Our approach employs binary hypervectors for node representation and couples them with graph convolution for efficient, scalable information propagation. Evaluated on multiple benchmark datasets, it achieves superior classification accuracy while drastically reducing computational cost: on the same GPU hardware, it is on average 9,561× faster than GCNII and 144.5× faster than HDGL. The framework delivers high accuracy, ultra-low computational overhead, and exceptional energy efficiency, naturally aligning with brain-inspired and in-memory computing architectures. It establishes a novel paradigm for lightweight, hardware-aware graph learning.
📝 Abstract
We present a novel algorithm, hdgc, that marries graph convolution with the binding and bundling operations of hyperdimensional computing for transductive graph learning. In prediction accuracy, hdgc outperforms major and popular graph neural network implementations, as well as state-of-the-art hyperdimensional computing implementations, on a collection of homophilic and heterophilic graphs. Compared with the most accurate learning methodologies we have tested, on the same target GPU platform, hdgc is on average 9561.0 and 144.5 times faster than gcnii, a graph neural network implementation, and HDGL, a hyperdimensional computing implementation, respectively. As the majority of the learning operates on binary vectors, we expect outstanding energy performance of hdgc on neuromorphic and emerging processing-in-memory devices.
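The abstract does not spell out hdgc's exact operations, but for binary hypervectors the standard HDC choices are element-wise XOR for binding and per-dimension majority voting for bundling. The sketch below illustrates those two primitives and how a single graph-convolution-style step could aggregate a node's neighborhood with them; the dimensionality `D` and the aggregation scheme are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

D = 10000  # hypervector dimensionality (assumed; a typical HDC choice)
rng = np.random.default_rng(0)

def random_hv():
    """Draw a random binary hypervector in {0,1}^D."""
    return rng.integers(0, 2, size=D, dtype=np.uint8)

def bind(a, b):
    """Binding: element-wise XOR; associates two hypervectors.

    XOR binding is self-inverse: bind(bind(a, b), b) recovers a.
    """
    return np.bitwise_xor(a, b)

def bundle(hvs):
    """Bundling: per-dimension majority vote over a list of hypervectors.

    Ties (possible for an even number of inputs) are broken toward 1 here.
    """
    hvs = np.asarray(hvs)
    return (2 * hvs.sum(axis=0) >= len(hvs)).astype(np.uint8)

# Toy message-passing step (illustrative, not hdgc's actual update rule):
# a node's new representation bundles its own hypervector with its
# neighbors', keeping the result binary so downstream ops stay bitwise.
node = random_hv()
neighbors = [random_hv() for _ in range(3)]
updated = bundle([node] + neighbors)
```

Because every intermediate stays in {0,1}^D, the propagation step reduces to bitwise XORs and popcounts, which is what makes this style of learning attractive for neuromorphic and processing-in-memory hardware.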