🤖 AI Summary
To meet real-time sequential modeling demands on edge devices, this work tackles the computational and energy bottlenecks of linear RNNs in audio and video stream processing. We propose a hardware-aware unstructured sparsification framework tailored to linear RNNs. We first show empirically, for the first time, that highly sparse linear RNNs consistently dominate their dense counterparts on the accuracy–efficiency Pareto frontier at equivalent precision. By combining fixed-point quantization with neuromorphic hardware (Intel Loihi 2), we co-design a sparse inference engine that closes the loop from algorithmic compression to measured energy efficiency. Compared with a dense model on an edge GPU, our method achieves 42× lower latency and 149× lower energy consumption. On real-time audio denoising, it attains state-of-the-art performance while halving compute and reducing memory footprint by 36%.
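As an illustrative sketch only (not the paper's released code): magnitude-based pruning is a standard way to obtain the kind of unstructured sparsity described above. The function name, matrix size, and 90% sparsity level here are assumptions for the example.

```python
import numpy as np

def magnitude_prune(w: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries so that roughly `sparsity`
    fraction of the weights become exactly zero (unstructured sparsity).

    Hypothetical helper for illustration; not the paper's implementation.
    """
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    mask = np.abs(w) > threshold
    return w * mask

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256))          # toy recurrent weight matrix
w_sparse = magnitude_prune(w, sparsity=0.9)
print(np.mean(w_sparse == 0))            # close to 0.9 for this seed
```

Unstructured (per-weight) pruning like this keeps accuracy better than structured pruning at high sparsity, but only translates into speedups on hardware that can exploit irregular sparsity, which is the role Loihi 2 plays in the paper.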
📝 Abstract
Linear recurrent neural networks enable powerful long-range sequence modeling with constant memory usage and constant time per token during inference. These architectures hold promise for streaming applications at the edge, but deployment in resource-constrained environments requires hardware-aware optimizations to minimize latency and energy consumption. Unstructured sparsity offers a compelling solution, enabling substantial reductions in compute and memory requirements when accelerated by compatible hardware platforms. In this paper, we conduct a scaling study to investigate the Pareto front of performance and efficiency across inference compute budgets. We find that highly sparse linear RNNs consistently achieve better efficiency–performance trade-offs than dense baselines, using 2× less compute and 36% less memory at iso-accuracy. Our models achieve state-of-the-art results on a real-time streaming audio denoising task. By quantizing our sparse models to fixed-point arithmetic and deploying them on the Intel Loihi 2 neuromorphic chip for real-time processing, we translate model compression into tangible gains: 42× lower latency and 149× lower energy consumption compared to a dense model on an edge GPU. Our findings showcase the transformative potential of unstructured sparsity, paving the way for highly efficient recurrent neural networks in real-world, resource-constrained environments.
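The constant-memory, constant-time-per-token property of a linear RNN can be sketched as a simple state recurrence h_t = A h_{t-1} + B x_t, y_t = C h_t, where a sparse A makes the per-token cost proportional to its nonzero count. Everything below (dimensions, density, the stabilization trick) is an assumption for illustration, not the paper's architecture.

```python
import numpy as np
from scipy.sparse import random as sparse_random

rng = np.random.default_rng(0)
d_state, d_in = 128, 16

# Sparse recurrent matrix A (10% density, unstructured sparsity pattern).
A = sparse_random(d_state, d_state, density=0.1, random_state=0, format="csr")
# Crude stabilization (assumption): rescale so the infinity norm is below 1,
# which bounds the spectral radius and keeps the recurrence from diverging.
A = A * (0.9 / abs(A).sum(axis=1).max())

B = rng.normal(size=(d_state, d_in)) * 0.1   # input projection
C = rng.normal(size=(d_in, d_state)) * 0.1   # readout

# Streaming inference: one token at a time, fixed-size state h, so memory
# and compute per token do not grow with sequence length.
h = np.zeros(d_state)
outputs = []
for x in rng.normal(size=(50, d_in)):        # a stream of 50 input frames
    h = A @ h + B @ x                        # sparse matvec: cost ∝ nnz(A)
    outputs.append(C @ h)
```

On hardware that executes sparse matrix–vector products natively, the `A @ h` step is where unstructured sparsity pays off; dense accelerators typically cannot skip the zeros and see no speedup from the same model.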