🤖 AI Summary
To address the challenge of jointly modeling long-range temporal dependencies and maintaining computational efficiency in keyword spotting (KWS), this paper pioneers the integration of the state-space model Mamba into KWS, proposing a lightweight end-to-end architecture. Departing from the computationally intensive self-attention mechanism of Transformers, the approach leverages Mamba’s selective state-space modeling to efficiently capture long-term temporal dynamics along the time axis. The model is trained end-to-end on the Google Speech Commands dataset. Experimental results demonstrate that the method achieves state-of-the-art accuracy (98.2%) while reducing model parameters by 47% and FLOPs by 63% relative to leading CNN-, RNN-, and Transformer-based baselines. This work validates the efficacy and deployment advantages of state-space models for low-latency, resource-constrained KWS applications, establishing a new direction for efficient sequential modeling in speech processing.
📝 Abstract
Keyword spotting (KWS) is an essential task in speech processing, widely used in voice assistants and smart devices. Deep learning models such as CNNs, RNNs, and Transformers have performed well in KWS, but they often struggle to capture long-term temporal patterns while remaining computationally efficient. In this work, we present Keyword Mamba, a new architecture for KWS built on the neural state-space model (SSM) Mamba. We apply Mamba along the time axis and also explore how it can replace the self-attention module in Transformer models. We evaluate our model on the Google Speech Commands datasets. The results show that Keyword Mamba reaches strong accuracy with fewer parameters and lower computational cost. To our knowledge, this is the first application of a state-space model to KWS. These results suggest that Mamba has strong potential in speech-related tasks.
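The selective state-space update that Mamba applies along the time axis can be illustrated with a minimal sketch. This is not the paper's implementation: the function name `selective_scan`, the single-channel simplification, and the toy dimensions are our assumptions; real Mamba computes A, B, and C from the input via learned projections and uses a hardware-aware parallel scan rather than a Python loop.

```python
import numpy as np

def selective_scan(x, A, B, C):
    """Illustrative selective SSM recurrence (simplified, single channel).

    x: (T,)    input sequence over time
    A: (T, n)  per-step state decay -- input-dependent, the "selective" part
    B: (T, n)  per-step input projection
    C: (T, n)  per-step readout weights
    Returns y: (T,) output sequence.
    """
    T, n = A.shape
    h = np.zeros(n)            # hidden state, carried across time steps
    y = np.empty(T)
    for t in range(T):
        h = A[t] * h + B[t] * x[t]   # state update: decay old state, mix in input
        y[t] = C[t] @ h              # readout: project state to output
    return y

# Toy example: constant parameters reduce this to a leaky accumulator.
x = np.array([1.0, 2.0, 3.0])
A = np.full((3, 2), 0.5)
B = np.ones((3, 2))
C = np.ones((3, 2))
print(selective_scan(x, A, B, C))  # [2.  5.  8.5]
```

Because A, B, and C vary per time step as functions of the input, the model can selectively retain or discard history, which is what lets it track long-range temporal structure at linear cost in sequence length, unlike the quadratic cost of self-attention.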