🤖 AI Summary
End-to-end spoken language models (SLMs) suffer from high response latency (~725 ms), primarily due to autoregressive speech token generation and reliance on computationally expensive flow-matching models. To address this, we propose an integrated framework combining multi-codebook speech tokenization with autoregressive multi-token prediction. Specifically, we design a lightweight multi-codebook tokenizer enabling fine-grained speech representation, and introduce an autoregressive multi-token prediction mechanism that directly generates multiple speech tokens per step, eliminating flow-matching and diffusion modules entirely. Our approach reduces first-chunk latency to 350 ms (a 52% reduction) while maintaining performance competitive with mainstream SLMs across standard benchmarks, including ASR, TTS, and speech understanding tasks. To our knowledge, this is the first work to achieve both high efficiency and high quality in end-to-end SLMs under strict low-latency constraints (<400 ms).
📝 Abstract
Current end-to-end spoken language models (SLMs) have made notable progress, yet they still suffer from considerable response latency. This delay arises primarily from the autoregressive generation of speech tokens and the reliance on complex flow-matching models for speech synthesis. To overcome this, we introduce VocalNet-M2, a novel low-latency SLM that integrates a multi-codebook tokenizer and a multi-token prediction (MTP) strategy. Our model directly generates multi-codebook speech tokens, eliminating the need for a latency-inducing flow-matching model. Furthermore, our MTP strategy improves both generation efficiency and overall performance. Extensive experiments demonstrate that VocalNet-M2 achieves a substantial reduction in first-chunk latency (from approximately 725 ms to 350 ms) while remaining competitive with mainstream SLMs. This work also provides a comprehensive comparison of single-codebook and multi-codebook strategies, offering valuable insights for developing efficient, high-performance SLMs for real-time interactive applications.
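To make the latency argument concrete, the sketch below illustrates the general idea of multi-token prediction over multiple codebooks: instead of emitting one speech token per decoder pass, each autoregressive step emits one token from every codebook at once, so a full acoustic frame costs a single pass. All sizes, the random "model" weights, and the toy state update are illustrative assumptions for exposition, not VocalNet-M2's actual architecture.

```python
import numpy as np

# Illustrative sketch only: one linear prediction head per codebook, greedy
# decoding. Dimensions and weights are assumed, not taken from the paper.
rng = np.random.default_rng(0)

HIDDEN = 16        # decoder hidden size (assumed)
NUM_CODEBOOKS = 4  # tokens emitted per autoregressive step (assumed)
VOCAB = 32         # size of each codebook (assumed)

# One prediction head per codebook; MTP predicts from all heads in parallel.
heads = [rng.standard_normal((HIDDEN, VOCAB)) for _ in range(NUM_CODEBOOKS)]

def predict_frame(state):
    """Map one hidden state to NUM_CODEBOOKS tokens in a single step."""
    return [int(np.argmax(state @ W)) for W in heads]

def generate(num_frames):
    """Autoregressive loop: each iteration yields a full multi-codebook frame,
    so num_frames decoder passes produce num_frames * NUM_CODEBOOKS tokens."""
    frames = []
    state = rng.standard_normal(HIDDEN)
    for _ in range(num_frames):
        tokens = predict_frame(state)
        # Toy recurrent update conditioned on the emitted tokens (a stand-in
        # for a real transformer decoder step).
        state = np.tanh(state + 0.1 * float(np.mean(tokens)))
        frames.append(tokens)
    return frames

frames = generate(num_frames=3)
```

Under single-token prediction, the same 12 tokens would need 12 sequential decoder passes; here they need 3, which is the mechanism by which MTP cuts first-chunk latency.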