An Empirical Analysis of Discrete Unit Representations in Speech Language Modeling Pre-training

📅 2025-09-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the optimization of discrete speech unit representations in Speech Language Models (SLMs) to enhance speech modeling during continual pre-training. We propose a systematic framework encompassing speech encoder design, controllable clustering granularity, phoneme-aligned discretization, and domain-matched data selection. We first uncover a strong correlation between model scale and optimal clustering granularity, and demonstrate that discrete vocabularies explicitly encode both linguistic and paralinguistic structures. Experiments show that principled discretization yields substantial downstream improvements, with an average relative WER reduction of 2.1%, and that distributional alignment between clustering data and target domains improves model robustness and generalization. Our work provides a reproducible methodology and theoretical insights for efficient speech representation learning in SLMs.
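
The paper's code is not reproduced here; the sketch below only illustrates the standard HuBERT-style discretization pipeline the summary refers to, in which frame-level features from a pretrained speech encoder are quantized with k-means into discrete unit IDs that then feed the SLM. The feature dimensionality, vocabulary size, and the random features standing in for real encoder outputs are all illustrative assumptions.

```python
# Minimal sketch (not the paper's code): k-means discretization of speech
# encoder features into a unit vocabulary, as in HuBERT-style pipelines.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def fit_unit_vocabulary(features: np.ndarray, n_units: int = 500) -> MiniBatchKMeans:
    """Fit a codebook on frame-level features of shape (n_frames, dim)."""
    return MiniBatchKMeans(n_clusters=n_units, batch_size=1024, random_state=0).fit(features)

def discretize(features: np.ndarray, codebook: MiniBatchKMeans) -> np.ndarray:
    """Map every frame to its nearest centroid, yielding a discrete unit sequence."""
    return codebook.predict(features)

# Random features stand in for real encoder outputs (e.g., 768-dim hidden states).
rng = np.random.default_rng(0)
feats = rng.normal(size=(10_000, 768)).astype(np.float32)
codebook = fit_unit_vocabulary(feats, n_units=500)
print(discretize(feats[:20], codebook))  # 20 unit IDs in [0, 500)
```

For reference, "relative WER reduction" is (WER_base - WER_new) / WER_base, so a baseline WER of 10.0% improving to 9.79% corresponds to the 2.1% relative reduction reported above.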

📝 Abstract
This paper investigates discrete unit representations in Speech Language Models (SLMs), focusing on optimizing speech modeling during continual pre-training. We systematically examine how model architecture, data representation, and training robustness influence the pre-training stage, in which existing pre-trained language models are adapted to the speech modality. Our experiments highlight the role of speech encoders and clustering granularity across different model scales, showing how optimal discretization strategies vary with model capacity. By analyzing cluster distributions and phoneme alignments, we investigate the effective use of the discrete vocabulary, uncovering both linguistic and paralinguistic patterns. Additionally, we explore the impact of clustering data selection on model robustness, highlighting the importance of domain matching between discretization training and target applications.
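
The cluster/phoneme alignment analysis the abstract describes is commonly quantified with cluster purity and normalized mutual information between unit IDs and frame-level phoneme labels. The sketch below is a hypothetical illustration on synthetic labels; in practice the phoneme labels would come from forced alignment and the unit IDs from the fitted codebook.

```python
# Hypothetical illustration of cluster/phoneme alignment metrics; the synthetic
# arrays below stand in for forced-alignment phonemes and k-means unit IDs.
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

def cluster_purity(units: np.ndarray, phonemes: np.ndarray) -> float:
    """Fraction of frames whose cluster's majority phoneme matches the frame's phoneme."""
    correct = 0
    for u in np.unique(units):
        correct += np.bincount(phonemes[units == u]).max()
    return correct / len(units)

rng = np.random.default_rng(0)
phonemes = rng.integers(0, 40, size=5_000)               # 40 phoneme classes
units = phonemes * 12 + rng.integers(0, 12, size=5_000)  # 480 units tied to phonemes
noisy = rng.random(5_000) < 0.3                          # 30% of frames get a random unit
units[noisy] = rng.integers(0, 480, size=noisy.sum())

print(f"purity = {cluster_purity(units, phonemes):.3f}")
print(f"NMI    = {normalized_mutual_info_score(phonemes, units):.3f}")
```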
Problem

Research questions and friction points this paper is trying to address.

Optimizing discrete unit representations for speech language model pre-training
Examining how model architecture, data representation, and training robustness affect adaptation to the speech modality
Identifying optimal discretization strategies across different model capacities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Discrete unit representations that optimize speech modeling during continual pre-training
A systematic study of model architecture, data representation, and training robustness
Analysis of speech encoders and clustering granularity across model scales (see the sketch after this list)
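
A granularity sweep in the spirit of the last point might compare codebook sizes fitted on the same features and check how much of each vocabulary is actually used. The code below is an illustrative sketch on random stand-in features, not the paper's experimental protocol.

```python
# Illustrative granularity sweep (random stand-in features, not the paper's setup):
# fit codebooks of increasing size and measure how the discrete vocabulary is used.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(0)
feats = rng.normal(size=(20_000, 64)).astype(np.float32)

for n_units in (100, 500, 1000):
    units = MiniBatchKMeans(n_clusters=n_units, batch_size=2048, random_state=0).fit_predict(feats)
    p = np.bincount(units, minlength=n_units) / len(units)
    used = (p > 0).mean()                            # fraction of codebook in use
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()  # unit distribution entropy
    print(f"k={n_units:4d}  used={used:6.1%}  entropy={entropy:5.2f} bits")
```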
Yanis Labrak
Laboratoire Informatique d’Avignon, Avignon University, Avignon, France
Richard Dufour
LS2N - TALN/NLP research group - Nantes University
Natural language processing · Biomedical domain · Language modeling · Spontaneous speech
Mickaël Rouvier
Laboratoire Informatique d’Avignon, Avignon University, Avignon, France