Latent Speech-Text Transformer

📅 2025-10-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
In autoregressive speech-text pretraining, speech token sequences are significantly longer than text sequences, causing computational load imbalance, cross-modal alignment difficulty, and slow model scaling. To address this, we propose the Latent Speech-Text Transformer (LST), which maps speech into compact latent representations via vector quantization and introduces a dynamic aggregation mechanism to adaptively group redundant speech tokens into semantically coherent latent speech blocks. This enables efficient length- and granularity-level alignment between speech and text units. LST constructs a unified joint representation space while preserving autoregressive modeling capability, substantially improving data and computational efficiency. Experiments demonstrate that LST achieves +6.5% speech accuracy on cross-modal tasks (e.g., HellaSwag) under compute constraints and +5.3% under data constraints, while also enhancing text understanding performance—validating its effectiveness and scalability for speech-text joint modeling.

📝 Abstract
Auto-regressive speech-text models are typically pre-trained on a large number of interleaved sequences of text tokens and raw speech encoded as speech tokens using vector quantization. These models have demonstrated state-of-the-art performance in speech-to-speech understanding and generation benchmarks, together with promising scaling laws, primarily enabled by the representational alignment between text and speech. Nevertheless, they suffer from shortcomings, partly owing to the disproportionately longer sequences of speech tokens in contrast to textual tokens. This results in a large compute imbalance between modalities during pre-training as well as during inference, and a potential hindrance to effectively aligning speech and text, ultimately translating to several orders of magnitude slower scaling laws. We introduce the Latent Speech-Text Transformer (LST), which makes pre-training speech-text models more data-efficient by dynamically and inexpensively aggregating speech tokens into latent speech patches. These patches serve as higher-level units that can either align with corresponding textual units to aid capability transfer or even encapsulate common speech sequences like silences to be more compute-efficient. We show that LST outperforms vanilla approaches on speech-to-speech as well as text-to-text benchmarks in both data- and compute-controlled settings, the former indicating more effective representational alignment and the latter indicating steeper scaling laws for speech-text models. On HellaSwag story completion, LST achieves 6.5% absolute gain in speech accuracy under compute-controlled training and 5.3% under data-controlled training, while also improving text performance. We will release our models, code, and the evaluation data to facilitate further research.
Problem

Research questions and friction points this paper is trying to address.

Addressing compute imbalance between speech and text tokens
Improving alignment efficiency of speech-text representations
Enhancing scaling laws for multimodal speech-text models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Latent Speech-Text Transformer aggregates speech tokens dynamically
Creates latent speech patches for compute efficiency
Improves alignment between speech and text representations
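The dynamic aggregation idea can be illustrated with a toy sketch — this is not the paper's implementation, and the function name, cosine threshold, and greedy merging rule are all hypothetical choices for illustration. The sketch greedily merges adjacent speech-token embeddings that are near-duplicates, so runs of redundant tokens (such as silences) collapse into a single mean-pooled latent patch, shortening the sequence the transformer must model.

```python
def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def aggregate_patches(embeddings, threshold=0.9):
    """Greedily merge adjacent speech-token embeddings into patches.

    A hypothetical stand-in for the paper's dynamic aggregation:
    consecutive tokens whose cosine similarity to the current patch's
    running mean exceeds `threshold` are merged (e.g. a long silence
    collapses into one patch); each finished patch is mean-pooled.
    """
    patches, current = [], []
    for emb in embeddings:
        if current:
            mean = [sum(v) / len(current) for v in zip(*current)]
            if cosine(mean, emb) >= threshold:
                current.append(emb)   # redundant token: extend the patch
                continue
            patches.append(mean)      # boundary found: close the patch
            current = [emb]
        else:
            current = [emb]
    if current:
        patches.append([sum(v) / len(current) for v in zip(*current)])
    return patches
```

For example, five identical "silence" vectors followed by three identical "speech" vectors reduce from eight tokens to two patches, which is the compute saving the abstract attributes to encapsulating common speech sequences.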
Yen-Ju Lu
Center for Language and Speech Processing, Johns Hopkins University
Yashesh Gaur
Meta, GenAI, Llama foundation models
Multimodal LLMs
Wei Zhou
Meta Superintelligence Labs
Benjamin Muller
Meta Superintelligence Labs
J. Villalba
Center for Language and Speech Processing, Johns Hopkins University
N. Dehak
Center for Language and Speech Processing, Johns Hopkins University
Luke S. Zettlemoyer
Meta Superintelligence Labs
Gargi Ghosh
Meta AI Research
NLP, multimodal, speech research
Mike Lewis
Facebook AI Research
Natural language processing, machine learning, linguistics
Srinivas Iyer
Meta Superintelligence Labs
Duc-Cuong Le
Meta Superintelligence Labs