VS-Graph: Scalable and Efficient Graph Classification Using Hyperdimensional Computing

📅 2025-12-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high computational cost and poor deployability of Graph Neural Networks (GNNs) for graph classification on resource-constrained devices, this paper proposes an efficient graph learning framework based on Hyperdimensional Computing (HDC). The method introduces a topology-driven node identification mechanism (Spike Diffusion) and an Associative Message Passing scheme that performs multi-hop neighborhood aggregation entirely within the high-dimensional vector space, eliminating backpropagation altogether. It achieves strong representation learning using hypervectors of only 128 dimensions. On benchmark datasets including MUTAG and DD, the approach improves accuracy by 4–5% over state-of-the-art HDC methods while accelerating training by up to 450× compared to mainstream GNNs. This work significantly narrows the gap between HDC and deep learning in both expressive power and practical utility.
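To make the summary concrete, the core HDC recipe it describes can be sketched as follows. This is a generic hypervector pipeline under assumed conventions (bipolar ±1 hypervectors, bundling by elementwise sum, re-binarization by sign), not the paper's exact implementation; the function names and hop count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 128  # hypervector dimensionality, matching the paper's compressed setting

def random_hvs(n, d=D):
    # Assign each node a random bipolar {-1, +1} hypervector,
    # a common baseline encoding in HDC.
    return rng.choice([-1, 1], size=(n, d)).astype(float)

def aggregate(node_hvs, adj, hops=2):
    # Multi-hop neighborhood aggregation: bundle each node's hypervector
    # with its neighbors' (elementwise sum via the adjacency matrix),
    # then quantize back to bipolar space with sign().
    h = node_hvs.copy()
    for _ in range(hops):
        h = h + adj @ h
        h = np.sign(h) + (h == 0)  # map exact zeros to +1
    return h

def graph_hv(node_hvs, adj):
    # Graph-level readout: bundle all aggregated node hypervectors
    # into a single graph hypervector and binarize.
    g = aggregate(node_hvs, adj).sum(axis=0)
    return np.sign(g) + (g == 0)
```

Because every step is a matrix product or an elementwise operation, the whole pipeline runs without gradients, which is where the reported training speedups come from.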

📝 Abstract
Graph classification is a fundamental task in domains ranging from molecular property prediction to materials design. While graph neural networks (GNNs) achieve strong performance by learning expressive representations via message passing, they incur high computational costs, limiting their scalability and deployment on resource-constrained devices. Hyperdimensional Computing (HDC), also known as Vector Symbolic Architectures (VSA), offers a lightweight, brain-inspired alternative, yet existing HDC-based graph methods typically struggle to match the predictive performance of GNNs. In this work, we propose VS-Graph, a vector-symbolic graph learning framework that narrows the gap between the efficiency of HDC and the expressive power of message passing. VS-Graph introduces a Spike Diffusion mechanism for topology-driven node identification and an Associative Message Passing scheme for multi-hop neighborhood aggregation entirely within the high-dimensional vector space. Without gradient-based optimization or backpropagation, our method achieves accuracy competitive with modern GNNs, outperforming the prior HDC baseline by 4–5% on standard benchmarks such as MUTAG and DD. It also matches or exceeds the performance of the GNN baselines on several datasets while accelerating training by a factor of up to 450×. Furthermore, VS-Graph maintains high accuracy even with the hypervector dimensionality reduced to D=128, demonstrating robustness under aggressive dimension compression and paving the way for ultra-efficient execution on edge and neuromorphic hardware.
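"Without gradient-based optimization" in HDC usually means an associative-memory classifier: bundle the graph hypervectors of each class into a prototype, then classify new graphs by similarity. A minimal sketch of that standard recipe follows; it is the conventional HDC training/inference loop, not necessarily the paper's exact procedure.

```python
import numpy as np

def train_prototypes(graph_hvs, labels, n_classes):
    # "Training" is one pass: bundle (sum) the hypervectors of all
    # training graphs belonging to each class, then binarize each
    # class prototype back to bipolar {-1, +1} form.
    protos = np.zeros((n_classes, graph_hvs.shape[1]))
    for hv, y in zip(graph_hvs, labels):
        protos[y] += hv
    protos = np.sign(protos)
    return protos + (protos == 0)  # map exact zeros to +1

def classify(hv, protos):
    # Predict the class whose prototype is most similar; for bipolar
    # vectors the dot product is proportional to cosine similarity.
    return int(np.argmax(protos @ hv))
```

A single additive pass over the training set, with no backpropagation, is what makes the reported 450× training speedup plausible.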
Problem

Research questions and friction points this paper is trying to address.

Improving graph classification efficiency while maintaining competitive accuracy.
Bridging the performance gap between lightweight HDC and powerful GNNs.
Enabling scalable graph learning on resource-constrained edge devices.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Spike Diffusion for topology-driven node identification
Associative Message Passing for multi-hop neighborhood aggregation in hypervector space
Backpropagation-free learning via hyperdimensional computing
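The page does not detail how Spike Diffusion works, so the sketch below is one plausible reading of "topology-driven node identification": seed each node with a unit spike, diffuse it over the degree-normalized adjacency for a few steps so each node acquires a structure-dependent signature, and project the signatures to bipolar hypervectors. All function names, the random projection, and the step count are illustrative assumptions, not the paper's mechanism.

```python
import numpy as np

def spike_diffusion_signatures(adj, steps=3):
    # Seed every node with a unit "spike" (the identity matrix) and
    # diffuse over the row-normalized adjacency. Row i then holds a
    # topology-dependent signature for node i.
    deg = adj.sum(axis=1, keepdims=True)
    P = adj / np.maximum(deg, 1)  # row-stochastic diffusion operator
    s = np.eye(adj.shape[0])
    for _ in range(steps):
        s = s @ P
    return s

def signatures_to_hvs(sig, D=128, seed=0):
    # Map diffusion signatures to bipolar hypervectors with a fixed
    # random projection (an illustrative encoding choice).
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((sig.shape[1], D))
    h = np.sign(sig @ R)
    return h + (h == 0)  # map exact zeros to +1
```

The point of such a scheme is that structurally similar nodes receive similar hypervectors, giving the downstream message passing topology-aware identities instead of purely random ones.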