VocalNet-M2: Advancing Low-Latency Spoken Language Modeling via Integrated Multi-Codebook Tokenization and Multi-Token Prediction

📅 2025-11-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
End-to-end spoken language models (SLMs) suffer from high response latency (~725 ms), primarily due to autoregressive speech token generation and reliance on computationally expensive flow-matching models. To address this, we propose an integrated framework combining multi-codebook speech tokenization with autoregressive multi-token prediction. Specifically, we design a lightweight multi-codebook tokenizer enabling fine-grained speech representation, and introduce an autoregressive multi-token prediction mechanism that directly generates multiple speech tokens—eliminating flow-matching or diffusion modules entirely. Our approach reduces first-token latency to 350 ms (a 52% reduction) while maintaining state-of-the-art performance across mainstream benchmarks—including ASR, TTS, and speech understanding tasks. To our knowledge, this is the first work achieving both high efficiency and high quality in end-to-end SLMs under strict low-latency constraints (<400 ms).
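The latency gain described above comes from emitting several speech tokens per decoder forward pass instead of one. A minimal sketch of the arithmetic, with illustrative numbers that are assumptions (the paper does not report per-step latency or chunk size), not the authors' measurements:

```python
# Conceptual sketch (not VocalNet-M2's implementation): why multi-token
# prediction (MTP) shrinks first-chunk latency. All numbers are assumed.

def decoding_steps(chunk_tokens: int, tokens_per_step: int) -> int:
    """Sequential decoder forward passes needed to emit one speech chunk."""
    return -(-chunk_tokens // tokens_per_step)  # ceiling division

CHUNK_TOKENS = 50      # speech tokens in the first audio chunk (assumed)
STEP_LATENCY_MS = 10   # latency of one decoder forward pass (assumed)

baseline_ms = decoding_steps(CHUNK_TOKENS, 1) * STEP_LATENCY_MS  # 1 token/step
mtp_ms = decoding_steps(CHUNK_TOKENS, 5) * STEP_LATENCY_MS       # 5 tokens/step

print(baseline_ms, mtp_ms)  # sequential cost drops by roughly the MTP factor
```

The sketch only captures the sequential-step count; the paper's remaining savings come from replacing the flow-matching synthesizer with direct multi-codebook token generation.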

📝 Abstract
Current end-to-end spoken language models (SLMs) have made notable progress, yet they still encounter considerable response latency. This delay primarily arises from the autoregressive generation of speech tokens and the reliance on complex flow-matching models for speech synthesis. To overcome this, we introduce VocalNet-M2, a novel low-latency SLM that integrates a multi-codebook tokenizer and a multi-token prediction (MTP) strategy. Our model directly generates multi-codebook speech tokens, thus eliminating the need for a latency-inducing flow-matching model. Furthermore, our MTP strategy enhances generation efficiency and improves overall performance. Extensive experiments demonstrate that VocalNet-M2 achieves a substantial reduction in first chunk latency (from approximately 725ms to 350ms) while maintaining competitive performance across mainstream SLMs. This work also provides a comprehensive comparison of single-codebook and multi-codebook strategies, offering valuable insights for developing efficient and high-performance SLMs for real-time interactive applications.
Problem

Research questions and friction points this paper is trying to address.

High response latency (~725 ms) in end-to-end spoken language models
Reliance on computationally expensive flow-matching models for speech synthesis
Inefficient one-token-at-a-time autoregressive speech generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrated multi-codebook tokenizer for speech tokens
Multi-token prediction strategy enhances generation efficiency
Eliminates latency-inducing flow-matching speech synthesis model
👥 Authors
Yuhao Wang (Shanghai Jiao Tong University)
Ziyang Cheng (University of Electronic Science and Technology of China)
Heyang Liu (Shanghai Jiao Tong University)
Ronghua Wu (Ant Group)
Qunshan Gu (Ant Group)
Yanfeng Wang (Shanghai Jiao Tong University)
Yu Wang (Shanghai Jiao Tong University)