THDC: Training Hyperdimensional Computing Models with Backpropagation

📅 2026-01-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work proposes Trainable Hyperdimensional Computing (THDC), an approach that addresses the key limitations of conventional Hyperdimensional Computing (HDC): its reliance on high-dimensional, static random vectors, which causes large memory overhead and restricted learning capacity. THDC enables end-to-end training of HDC models for the first time by replacing the fixed random vectors with learnable embeddings and optimizing class representations through a single-layer binary neural network. The method drastically reduces the required dimensionality, from 10,000 down to 64, while matching or exceeding the accuracy of existing HDC approaches on MNIST, Fashion-MNIST, and CIFAR-10. This advancement significantly enhances both the representational power and memory efficiency of HDC models.

📝 Abstract
Hyperdimensional computing (HDC) offers lightweight learning for energy-constrained devices by encoding data into high-dimensional vectors. However, its reliance on ultra-high dimensionality and static, randomly initialized hypervectors limits memory efficiency and learning capacity. Therefore, we propose Trainable Hyperdimensional Computing (THDC), which enables end-to-end training of HDC via backpropagation. THDC replaces randomly initialized vectors with trainable embeddings and introduces a one-layer binary neural network to optimize class representations. Evaluated on MNIST, Fashion-MNIST, and CIFAR-10, THDC achieves equal or better accuracy than state-of-the-art HDC, with dimensionality reduced from 10,000 to 64.
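The core idea in the abstract (a trainable embedding table in place of fixed random hypervectors, plus a single trainable class layer, fit end-to-end by gradient descent at a small dimensionality such as D = 64) can be sketched as follows. This is a minimal illustration under stated assumptions: the toy data, the variable names, and the omission of the paper's binarization step are all assumptions, not the authors' implementation.

```python
import numpy as np

# Hedged sketch of the THDC idea: replace fixed random hypervectors with
# a trainable embedding table E, score classes with one trainable layer W,
# and fit both by gradient descent. The paper's binary neural network /
# binarization step is omitted here for brevity.
rng = np.random.default_rng(0)

L, D, C = 16, 64, 3          # quantization levels, hypervector dim, classes
N, F = 30, 8                 # samples, features per sample

X = rng.integers(0, L, size=(N, F))       # toy quantized inputs
M_true = rng.standard_normal((L, C))      # hidden teacher used only to label
y = M_true[X].sum(axis=1).argmax(axis=1)  # realizable synthetic labels

E = 0.1 * rng.standard_normal((L, D))     # trainable level embeddings
W = 0.1 * rng.standard_normal((D, C))     # single-layer class weights

def forward(X, E, W):
    H = E[X].sum(axis=1)                  # "bundle" feature embeddings: (N, D)
    S = H @ W                             # class scores: (N, C)
    S = S - S.max(axis=1, keepdims=True)  # numerically stable softmax
    P = np.exp(S)
    P /= P.sum(axis=1, keepdims=True)
    return H, P

def xent(P, y):
    return -np.log(P[np.arange(len(y)), y] + 1e-12).mean()

_, P = forward(X, E, W)
loss_before = xent(P, y)

lr = 0.5
for _ in range(300):
    H, P = forward(X, E, W)
    G = P.copy()
    G[np.arange(N), y] -= 1.0             # dLoss/dScores for cross-entropy
    G /= N
    dW = H.T @ G                          # gradient of class layer
    dH = G @ W.T                          # gradient flowing into bundles
    dE = np.zeros_like(E)
    np.add.at(dE, X, dH[:, None, :])      # scatter gradients to embeddings
    W -= lr * dW
    E -= lr * dE

_, P = forward(X, E, W)
loss_after = xent(P, y)
accuracy = (P.argmax(axis=1) == y).mean()
```

On this toy task the training loss drops and accuracy rises well above chance, illustrating the claimed benefit: because the embeddings themselves are optimized, far fewer dimensions are needed than with static random hypervectors.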
Problem

Research questions and friction points this paper is trying to address.

Hyperdimensional Computing
memory efficiency
learning capacity
ultra-high dimensionality
randomly initialized hypervectors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hyperdimensional Computing
Backpropagation
Trainable Embeddings
Binary Neural Network
Dimensionality Reduction