🤖 AI Summary
Edge devices struggle with efficient long-sequence processing due to the prohibitively high memory overhead of standard Transformers and the GPU dependency and excessive resource consumption of S4-like models. Method: This work pioneers the adaptation of the S4D state-space model to memristive in-memory computing (IMC) hardware, introducing a quantization-aware training framework tailored to analog-domain non-idealities—enabling high-accuracy ternary weight mapping and hardware-software co-optimization. Crucially, it maps the S4 kernel’s recursive computation directly onto the analog IMC architecture, eliminating GPU reliance. Contribution/Results: Experimental evaluation demonstrates substantial reductions in both memory footprint and computational demand, while maintaining high accuracy on lightweight long-sequence tasks. The approach validates the feasibility of real-time, on-device long-sequence modeling at the edge.
📝 Abstract
Processing long temporal sequences is a key challenge in deep learning. In recent years, Transformers have become state-of-the-art for this task, but suffer from excessive memory requirements due to the need to explicitly store the sequences. To address this issue, structured state-space sequence (S4) models recently emerged, offering a fixed-size memory state while still enabling the processing of very long sequence contexts. The linear recurrent state update in these models makes them highly efficient on modern graphics processing units (GPUs), since the recurrence can be unrolled into a convolution. However, this approach demands significant memory and massively parallel computation, which is only available on the latest GPUs. In this work, we aim to bring the power of S4 models to edge hardware by significantly reducing the size and computational demand of an S4D model through quantization-aware training, even achieving ternary weights for a simple real-world task. To this end, we extend conventional quantization-aware training to tailor it to analog in-memory compute hardware. We then demonstrate the deployment of recurrent S4D kernels on memristive crossbar arrays, enabling their computation in an in-memory compute fashion. To our knowledge, this is the first implementation of S4 kernels on in-memory compute hardware.
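The equivalence the abstract refers to, between the recurrent state update and its unrolled convolutional form, can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: it assumes a diagonal (S4D-style) state matrix, zero-order-hold discretization, and randomly drawn parameters.

```python
import numpy as np

# Hedged sketch of a diagonal state-space (S4D-style) layer: the recurrent
# update used for on-device inference, and the unrolled convolutional form
# used for efficient GPU training. All parameters here are illustrative.
rng = np.random.default_rng(0)
N, L = 4, 16                            # state size, sequence length
A = -np.abs(rng.standard_normal(N))     # diagonal, stable continuous-time poles
B = rng.standard_normal(N)
C = rng.standard_normal(N)
dt = 0.1

# Zero-order-hold discretization (elementwise, since A is diagonal)
Ad = np.exp(dt * A)
Bd = (Ad - 1.0) / A * B

u = rng.standard_normal(L)              # input sequence

# 1) Recurrent form: x_k = Ad * x_{k-1} + Bd * u_k,  y_k = C . x_k
x = np.zeros(N)
y_rec = np.empty(L)
for k in range(L):
    x = Ad * x + Bd * u[k]
    y_rec[k] = C @ x

# 2) Convolutional form: y = K * u with kernel K_j = C . (Ad^j * Bd)
K = np.array([C @ (Ad**j * Bd) for j in range(L)])
y_conv = np.array([sum(K[j] * u[k - j] for j in range(k + 1)) for k in range(L)])

assert np.allclose(y_rec, y_conv)       # both forms produce the same output
```

The convolutional form requires materializing the length-L kernel and a large parallel convolution, which motivates the GPU dependence mentioned above; the recurrent form needs only the fixed-size state `x`, which is what the work maps onto the memristive crossbar.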