WavSLM: Single-Stream Speech Language Modeling via WavLM Distillation

📅 2026-03-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of unifying semantic and acoustic modeling in speech without textual supervision or complex architectures, while supporting single-stream autoregressive generation. The authors propose a speech language model built on a single codebook derived from self-supervised WavLM representations via vector quantization. By autoregressively predicting the next speech unit, the model jointly encodes semantic and acoustic information within a unified token stream. This approach achieves, for the first time, unified semantic-acoustic modeling under purely self-supervised conditions while preserving the single-stream autoregressive paradigm, thereby significantly simplifying the model architecture. Experimental results demonstrate competitive speech-generation performance with fewer parameters, reduced training-data requirements, and support for efficient streaming inference.
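The recipe described above — quantize continuous WavLM features into a single codebook, then train next-unit prediction on the resulting token stream — can be sketched in a few lines of NumPy. Everything below is an illustrative toy: the random arrays stand in for WavLM layer features and a learned codebook, and the dimensions are arbitrary; this is not the paper's implementation.

```python
import numpy as np

def quantize(features, codebook):
    """Map each feature frame to the index of its nearest codebook vector
    (single-codebook vector quantization)."""
    # (T, 1, D) - (1, K, D) -> (T, K) L2 distances per frame/code pair
    dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=-1)
    return dists.argmin(axis=1)  # (T,) discrete token ids

rng = np.random.default_rng(0)
T, D, K = 50, 8, 16                   # frames, feature dim, codebook size (toy values)
features = rng.normal(size=(T, D))    # stand-in for self-supervised WavLM features
codebook = rng.normal(size=(K, D))    # stand-in for a learned codebook

tokens = quantize(features, codebook)      # one unified token stream per utterance
inputs, targets = tokens[:-1], tokens[1:]  # next-unit prediction training pairs
```

Because both semantic and acoustic information live in the same token stream, a standard autoregressive language model trained on `(inputs, targets)` pairs like these needs no hierarchical or multi-stream decoding.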

📝 Abstract
Large language models show that simple autoregressive training can yield scalable and coherent generation, but extending this paradigm to speech remains challenging due to the entanglement of semantic and acoustic information. Most existing speech language models rely on text supervision, hierarchical token streams, or complex hybrid architectures, departing from the single-stream generative pretraining paradigm that has proven effective in text. In this work, we introduce WavSLM, a speech language model trained by quantizing and distilling self-supervised WavLM representations into a single codebook and optimizing an autoregressive next-chunk prediction objective. WavSLM jointly models semantic and acoustic information within a single token stream without text supervision or text pretraining. Despite its simplicity, it achieves competitive performance on consistency benchmarks and speech generation while using fewer parameters and less training data and supporting streaming inference. Demo samples are available at https://lucadellalib.github.io/wavslm-web/.
Problem

Research questions and friction points this paper is trying to address.

speech language modeling
single-stream generation
semantic and acoustic entanglement
text-free pretraining
autoregressive speech modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

single-stream speech modeling
WavLM distillation
autoregressive speech generation
text-free pretraining
unified semantic-acoustic representation