🤖 AI Summary
To address the high memory overhead (O(CD)) and the difficulty in balancing robustness and scalability inherent in "one-prototype-per-class" designs in Hyperdimensional Computing (HDC), this paper proposes LogHD, the first logarithmic class-axis compression framework for HDC classification. Its core innovations are a capacity-aware codebook and an activation-contour decoding mechanism, which jointly compress the class dimension to logarithmic scale while preserving high-dimensional representational fidelity. LogHD integrates bundle-based hypervector construction, k-ary encoding, and feature-axis sparsification to enable hardware-efficient bit-level operations. Experiments demonstrate that LogHD achieves 2.5–3.0× higher bit-flip resilience under identical memory budgets. In ASIC implementation, it delivers 498× energy efficiency and 62.6× speedup over CPU/GPU baselines, significantly outperforming existing HDC hardware approaches.
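The k-ary encoding idea above (representing each of C classes by a short base-k digit string, one digit per bundle hypervector) can be sketched as follows. This is an illustrative assumption about the index mapping, not the paper's actual codebook construction; the helper names are hypothetical:

```python
import math

def class_to_digits(c: int, k: int, n: int) -> list[int]:
    """Write class index c in base k using n digits (least-significant first).
    Each digit would select one of k codewords for the corresponding bundle."""
    digits = []
    for _ in range(n):
        digits.append(c % k)
        c //= k
    return digits

def digits_to_class(digits: list[int], k: int) -> int:
    """Invert the mapping: recover the class index from its base-k digits."""
    return sum(d * k**i for i, d in enumerate(digits))

C, k = 10, 3                       # e.g. 10 classes, alphabet size 3
n = math.ceil(math.log(C, k))      # number of bundle hypervectors: ceil(log_3 10) = 3
assert all(digits_to_class(class_to_digits(c, k, n), k) == c for c in range(C))
```

Decoding a query then amounts to picking the best-matching codeword in each of the n bundles and inverting this digit string, which is what makes the n-dimensional activation space sufficient to distinguish all C classes.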
📝 Abstract
Hyperdimensional computing (HDC) suits memory-, energy-, and reliability-constrained systems, yet the standard "one prototype per class" design requires $O(CD)$ memory (with $C$ classes and dimensionality $D$). Prior compaction reduces $D$ (feature axis), improving storage/compute but weakening robustness. We introduce LogHD, a logarithmic class-axis reduction that replaces the $C$ per-class prototypes with $n \approx \lceil \log_k C \rceil$ bundle hypervectors (alphabet size $k$) and decodes in an $n$-dimensional activation space, cutting memory to $O(D \log_k C)$ while preserving $D$. LogHD uses a capacity-aware codebook and profile-based decoding, and composes with feature-axis sparsification. Across datasets and injected bit flips, LogHD attains competitive accuracy with smaller models and higher resilience at matched memory. Under equal memory, it sustains target accuracy at roughly $2.5$–$3.0\times$ higher bit-flip rates than feature-axis compression; an ASIC instantiation delivers $498\times$ energy efficiency and $62.6\times$ speedup over an AMD Ryzen 9 9950X, and $24.3\times$/$6.58\times$ over an NVIDIA RTX 4090, and is $4.06\times$ more energy-efficient and $2.19\times$ faster than a feature-axis HDC ASIC baseline.
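The memory claim ($O(CD)$ versus $O(D \log_k C)$) is easy to sanity-check numerically. A minimal sketch, assuming binary hypervectors and counting only prototype/bundle storage (it ignores the codebook itself and any per-bundle metadata; the function names and the example sizes are illustrative):

```python
import math

def prototype_memory_bits(C: int, D: int) -> int:
    """Classic HDC: one binary prototype per class, C hypervectors of D bits."""
    return C * D

def loghd_memory_bits(C: int, D: int, k: int) -> int:
    """Class-axis reduction: n = ceil(log_k C) bundle hypervectors of D bits."""
    n = math.ceil(math.log(C, k))
    return n * D

# Hypothetical workload: a 26-class task, D = 10,000, binary alphabet (k = 2).
C, D, k = 26, 10_000, 2
print(prototype_memory_bits(C, D))   # 26 * 10000 = 260000 bits
print(loghd_memory_bits(C, D, k))    # ceil(log2 26) = 5 bundles -> 50000 bits
```

Here the class axis shrinks from 26 rows to 5, a 5.2× reduction, while the feature dimension $D$ (and thus the per-dimension redundancy that underpins bit-flip resilience) is untouched.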