HyDra: SOT-CAM Based Vector Symbolic Macro for Hyperdimensional Computing

📅 2025-04-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing HDC hardware accelerators suffer from poor generality and high latency in encoding and similarity search. This work proposes a general, reconfigurable on-chip training/inference architecture for hyperdimensional computing, leveraging SOT-MRAM-based content-addressable memory (CAM) to enable in-memory compute for binding, permutation, and similarity search. We introduce a four-stage voltage-scaling scheme to preserve Hamming distance accuracy; replace conventional bit rotation with bit dropping during read operations; and design an HDC-optimized adder. Compared to CMOS-based HDC implementations, our design reduces energy consumption for addition, permutation, multiplication, and search by 21.5×, 552.7×, 1.45×, and 282.6×, respectively. It achieves 2.27× lower energy than state-of-the-art HD accelerators and delivers 2702× and 23,161× speedup over CPU and eGPU baselines, respectively, with <3% accuracy degradation.

📝 Abstract
Hyperdimensional computing (HDC) is a brain-inspired paradigm valued for its noise robustness, parallelism, energy efficiency, and low computational overhead. Hardware accelerators are being explored to further enhance its performance, but current solutions are often limited by application specificity and the latency of encoding and similarity search. This paper presents a generalized, reconfigurable on-chip training and inference architecture for HDC, utilizing spin-orbit-torque magnetic RAM (SOT-MRAM) content-addressable memory (CAM). The proposed SOT-CAM array integrates storage and computation, enabling in-memory execution of key HDC operations: binding (bitwise multiplication), permutation (bit rotation), and efficient similarity search. To mitigate interconnect parasitic effects in similarity search, a four-stage voltage scaling scheme is proposed to ensure accurate Hamming distance representation. Additionally, a novel bit drop method replaces bit rotation during read operations, and an HDC-specific adder reduces energy and area by 1.51x and 1.43x, respectively. Benchmarked at 7nm, the architecture achieves energy reductions of 21.5x, 552.74x, 1.45x, and 282.57x for addition, permutation, multiplication, and search operations, respectively, compared to CMOS-based HDC. Against state-of-the-art HD accelerators, it achieves 2.27x lower energy consumption and outperforms CPU and eGPU implementations by 2702x and 23161x, respectively, with less than 3% drop in accuracy.
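The three in-memory HDC primitives named in the abstract (binding, permutation, similarity search) can be illustrated in software. The sketch below is a minimal NumPy illustration of the generic HDC operations, not the paper's SOT-CAM implementation; the dimensionality and bipolar encoding are common HDC conventions assumed here.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality (a typical HDC choice)

# Random bipolar hypervectors with elements in {-1, +1}
a = rng.choice([-1, 1], size=D)
b = rng.choice([-1, 1], size=D)

# Binding: element-wise multiplication (equivalent to XOR for binary vectors)
bound = a * b

# Permutation: cyclic bit rotation, used to encode sequence position
permuted = np.roll(a, 1)

# Similarity search: normalized Hamming distance between two hypervectors
def hamming(x, y):
    return np.count_nonzero(x != y) / x.size

print(hamming(a, a))      # 0.0: a vector matches itself exactly
print(hamming(a, bound))  # close to 0.5: binding yields a quasi-orthogonal vector
```

Because random hypervectors are quasi-orthogonal in high dimensions, a bound vector is dissimilar from both of its inputs, which is what makes Hamming-distance search a usable similarity measure.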
Problem

Research questions and friction points this paper is trying to address.

Develops SOT-CAM based architecture for hyperdimensional computing efficiency
Addresses latency and specificity in HDC encoding and similarity search
Proposes voltage scaling and bit drop to optimize energy and accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

SOT-CAM array integrates storage and computation
Four-stage voltage scaling reduces parasitic effects
Bit drop method and HDC-specific adder save energy
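The bit drop idea listed above can be pictured as reading a hypervector at an offset and discarding the bit that would otherwise wrap around, instead of physically rotating the stored word. This is an illustrative NumPy sketch of that equivalence, not the paper's circuit-level mechanism:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 10_000
hv = rng.integers(0, 2, size=D)  # binary hypervector

# Conventional permutation: cyclic rotation by one position
rotated = np.roll(hv, 1)

# Bit drop (illustrative): read all but the last bit at a one-position
# offset, so no physical shift of the stored data is needed
dropped = hv[:-1]

# The dropped read matches the rotated vector on the D-1 shared positions
assert np.array_equal(rotated[1:], dropped)
```

With D in the thousands, discarding one bit per permutation perturbs the Hamming distance negligibly, which is consistent with the reported <3% accuracy drop.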