🤖 AI Summary
In autoregressive speech-text pretraining, speech token sequences are significantly longer than text sequences, causing a compute imbalance between modalities, cross-modal alignment difficulty, and slower model scaling. To address this, we propose the Latent Speech-Text Transformer (LST), which takes the speech tokens produced by vector quantization and dynamically aggregates redundant spans of them into semantically coherent latent speech patches. This enables efficient length- and granularity-level alignment between speech and text units. LST constructs a unified joint representation space while preserving autoregressive modeling capability, substantially improving data and compute efficiency. Experiments demonstrate that LST achieves a 6.5% absolute gain in speech accuracy on HellaSwag story completion under compute-controlled training and a 5.3% gain under data-controlled training, while also improving text understanding performance, validating its effectiveness and scalability for joint speech-text modeling.
📝 Abstract
Auto-regressive speech-text models are typically pre-trained on a large number of interleaved sequences of text tokens and raw speech encoded as speech tokens using vector quantization. These models have demonstrated state-of-the-art performance on speech-to-speech understanding and generation benchmarks, together with promising scaling laws, primarily enabled by the representational alignment between text and speech. Nevertheless, they suffer from shortcomings, partly owing to the disproportionately longer sequences of speech tokens compared with textual tokens. This results in a large compute imbalance between modalities during pre-training as well as during inference, and a potential hindrance to effectively aligning speech and text, ultimately translating to several orders of magnitude slower scaling laws. We introduce the Latent Speech-Text Transformer (LST), which makes pre-training speech-text models more data-efficient by dynamically and inexpensively aggregating speech tokens into latent speech patches. These patches serve as higher-level units that can either align with corresponding textual units to aid capability transfer or encapsulate common speech sequences like silences to be more compute-efficient. We show that LST outperforms vanilla approaches on speech-to-speech as well as text-to-text benchmarks in both data- and compute-controlled settings, the former indicating more effective representational alignment and the latter indicating steeper scaling laws for speech-text models. On HellaSwag story completion, LST achieves a 6.5% absolute gain in speech accuracy under compute-controlled training and a 5.3% gain under data-controlled training, while also improving text performance. We will release our models, code, and evaluation data to facilitate further research.
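To make the intuition behind latent speech patches concrete, here is a minimal, hypothetical sketch (not the paper's actual aggregation mechanism, which is learned and dynamic): discrete speech tokens from a vector-quantized encoder are grouped into patches by collapsing runs of repeated token IDs, so that redundant stretches such as long silences occupy a single unit and the sequence the transformer attends over becomes much shorter. The function name and the run-length scheme are illustrative assumptions.

```python
# Illustrative sketch only: run-length grouping of vector-quantized speech tokens.
# The real LST aggregation is learned; this toy version just shows how collapsing
# redundant token runs (e.g., a repeated silence token) shortens the sequence.

def aggregate_speech_tokens(token_ids):
    """Group consecutive identical speech tokens into (token_id, run_length) patches."""
    patches = []
    for tok in token_ids:
        if patches and patches[-1][0] == tok:
            # Extend the current patch: same token repeated (e.g., ongoing silence).
            patches[-1] = (tok, patches[-1][1] + 1)
        else:
            # Start a new patch at a token change.
            patches.append((tok, 1))
    return patches

# Example: a silence token (id 0) repeated four times collapses into one patch.
tokens = [0, 0, 0, 0, 17, 17, 42, 0, 0, 9]
patches = aggregate_speech_tokens(tokens)
print(patches)                           # [(0, 4), (17, 2), (42, 1), (0, 2), (9, 1)]
print(len(tokens), "->", len(patches))   # 10 -> 5
```

A patch sequence like this is roughly length-comparable to a text token sequence, which is the alignment property the abstract argues enables better capability transfer between modalities.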