🤖 AI Summary
This work proposes Trainable Hyperdimensional Computing (THDC), a novel approach that overcomes the limitations of conventional Hyperdimensional Computing (HDC), which relies on high-dimensional static random vectors and consequently suffers from large memory overhead and restricted learning capacity. THDC enables end-to-end training of HDC models for the first time by replacing fixed random vectors with learnable embeddings and optimizing class representations through a single-layer binary neural network. The proposed method drastically reduces the required dimensionality, from 10,000 down to 64, while achieving accuracy on MNIST, Fashion-MNIST, and CIFAR-10 that matches or exceeds that of existing HDC approaches. This advancement significantly enhances both the representational power and memory efficiency of HDC models.
📝 Abstract
Hyperdimensional computing (HDC) offers lightweight learning for energy-constrained devices by encoding data into high-dimensional vectors. However, its reliance on ultra-high dimensionality and static, randomly initialized hypervectors limits memory efficiency and learning capacity. To address this, we propose Trainable Hyperdimensional Computing (THDC), which enables end-to-end training of HDC models via backpropagation. THDC replaces randomly initialized vectors with trainable embeddings and introduces a one-layer binary neural network to optimize class representations. Evaluated on MNIST, Fashion-MNIST, and CIFAR-10, THDC achieves equal or better accuracy than state-of-the-art HDC, with dimensionality reduced from 10,000 to 64.
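To make the baseline concrete, the following is a minimal pure-Python sketch of the conventional HDC pipeline the abstract contrasts against: static random bipolar hypervectors, element-wise majority bundling to form class representations, and cosine-similarity classification. The function names and the small dimensionality are illustrative assumptions, not the paper's implementation; THDC's contribution is to replace the static `random_hypervector` lookups with trainable embeddings and to learn the class representations with a one-layer binary network instead of bundling.

```python
import math
import random

D = 64  # hypervector dimensionality (THDC operates at this scale; classic HDC uses ~10,000)

def random_hypervector(d=D):
    """Conventional HDC: a static, randomly initialized bipolar hypervector.
    In THDC, these fixed vectors become trainable embedding parameters."""
    return [random.choice((-1, 1)) for _ in range(d)]

def bundle(vectors):
    """Superpose hypervectors into a class representation via
    element-wise majority (the sign of the per-dimension sum)."""
    return [1 if sum(col) >= 0 else -1 for col in zip(*vectors)]

def cosine(a, b):
    """Similarity score used to assign a query to the nearest class prototype."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def classify(query, class_prototypes):
    """Predict the class whose bundled prototype is most similar to the query."""
    return max(class_prototypes, key=lambda c: cosine(query, class_prototypes[c]))
```

Because bundling and nearest-prototype matching involve no gradient-based learning, accuracy in this scheme hinges on very high dimensionality to keep random hypervectors quasi-orthogonal; making the encodings and class representations trainable is what lets THDC shrink D to 64.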