🤖 AI Summary
Quantizing compact state-space models (SSMs) for edge AI suffers from severe accuracy degradation, making sub-8-bit deployment unreliable. To address this, we propose a heterogeneous quantization strategy coupled with a QAT-stabilized parameterization, enabling component-wise precision allocation guided by sensitivity analysis. We systematically evaluate PTQ and QAT co-optimization pathways on the S4D architecture and achieve reliable SSM deployment across 4–8-bit mixed-precision configurations: at 8-bit precision, sequential-MNIST accuracy improves from 40% (PTQ) to 96% (QAT), while the memory footprint is reduced by 6×. This work establishes a complete, reproducible, and efficient quantization framework for edge-deployable SSMs, advancing the practical adoption of low-bit sequence models.
📝 Abstract
State-space models (SSMs) have recently gained attention in deep learning for their ability to efficiently model long-range dependencies, making them promising candidates for edge-AI applications. In this paper, we analyze the effects of quantization on small-scale SSMs with a focus on reducing memory and computational costs while maintaining task performance. Using the S4D architecture, we first investigate post-training quantization (PTQ) and show that the state matrix A and the internal state x are particularly sensitive to quantization. Furthermore, we analyze the impact of different quantization techniques applied to the parameters and activations of the S4D architecture. To address the performance drop observed after PTQ, we apply quantization-aware training (QAT), significantly improving accuracy on the sequential MNIST benchmark at 8-bit precision from 40% (PTQ) to 96%. We further demonstrate the potential of QAT for enabling sub-8-bit precisions and evaluate different parameterization schemes for QAT stability. Additionally, we propose a heterogeneous quantization strategy that assigns different precision levels to model components, reducing the overall memory footprint by a factor of 6 without sacrificing performance. Our results provide actionable insights for deploying quantized SSMs in resource-constrained environments.
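To make the quantization terminology concrete, the following is a minimal illustrative sketch (not the paper's code) of uniform symmetric fake quantization, the forward-pass operation underlying both PTQ calibration and QAT. The `bits` argument stands in for the per-component precision assigned by a heterogeneous scheme; the function name and scale choice are assumptions for illustration only.

```python
import numpy as np

def fake_quantize(w, bits=8):
    """Simulate uniform symmetric quantization: round to `bits`-bit
    integers and immediately dequantize, so the model sees the
    quantized values while arithmetic stays in floating point."""
    qmax = 2 ** (bits - 1) - 1          # e.g. 127 for 8-bit
    scale = np.max(np.abs(w)) / qmax    # per-tensor scale (assumed max-abs calibration)
    if scale == 0:
        return w
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale                    # dequantized values

# Lower precision coarsens the grid, so the quantization error grows;
# this is why sensitive components (e.g. the state matrix A) may need
# more bits than others under a heterogeneous allocation.
rng = np.random.default_rng(0)
A = rng.standard_normal((16, 16))
err8 = np.abs(fake_quantize(A, bits=8) - A).max()
err4 = np.abs(fake_quantize(A, bits=4) - A).max()
assert err8 < err4
```

In an actual QAT loop, this operation would be applied in the forward pass with a straight-through gradient estimator in the backward pass; the sketch above only shows the rounding behavior.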