🤖 AI Summary
Current Spoken Language Models (SLMs) exhibit a strong English bias owing to the scarcity of multilingual speech evaluation benchmarks and training data, which results in poor cross-lingual generalization. To address this, we propose a text-free cross-lingual speech-unit interleaving method that leverages discrete speech tokens to achieve semantic alignment and strengthen generative capability. We construct and publicly release the first large-scale synthetic EN-FR speech dataset for both training and evaluation, built with GPT-4-assisted speech synthesis. Extensive experiments on 360M- and 1B-parameter models show substantial gains in monolingual semantic accuracy, robust cross-lingual continuation, and markedly improved cross-lingual hidden-state alignment. Our work establishes a reproducible foundation and introduces a new paradigm for SLM research on low-resource languages.
📝 Abstract
Spoken Language Models (SLMs) aim to learn linguistic competence directly from speech using discrete units, widening access to Natural Language Processing (NLP) technologies for languages with limited written resources. However, progress has been largely English-centric due to scarce spoken evaluation benchmarks and training data, making cross-lingual learning difficult. We present a cross-lingual interleaving method that mixes speech tokens across languages without textual supervision. We also release an EN-FR TinyStories training dataset (~42k hours), together with EN-FR spoken StoryCloze and TopicCloze benchmarks for cross-lingual semantic evaluation, all synthetically generated using GPT-4. On 360M and 1B SLMs under matched training-token budgets, interleaving improves monolingual semantic accuracy, enables robust cross-lingual continuation, and strengthens cross-lingual hidden-state alignment. Taken together, these results indicate that cross-lingual interleaving is a simple, scalable route to building multilingual SLMs that understand and converse across languages. All resources will be made open-source to support reproducibility.
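The core idea of cross-lingual interleaving can be sketched in a few lines: alternate fixed-length spans of discrete speech units drawn from a paired EN and FR utterance into a single training sequence, with no text involved. This is a minimal illustrative sketch only; the `span_len`, the alternation policy, and the toy cluster IDs below are assumptions, not the paper's exact scheme.

```python
import random


def interleave_units(units_a, units_b, span_len=5, seed=0):
    """Alternate fixed-length spans of discrete speech units from two
    languages into one mixed sequence.

    Hypothetical sketch: the actual span-selection policy used for
    training SLMs may differ from this simple alternation.
    """
    rng = random.Random(seed)
    mixed = []
    i = j = 0
    take_a = rng.random() < 0.5  # randomly pick which language starts
    while i < len(units_a) or j < len(units_b):
        if take_a and i < len(units_a):
            mixed.extend(units_a[i:i + span_len])
            i += span_len
        elif j < len(units_b):
            mixed.extend(units_b[j:j + span_len])
            j += span_len
        take_a = not take_a  # switch language for the next span
    return mixed


# Toy unit sequences standing in for HuBERT-style cluster IDs
en_units = [101, 102, 103, 104, 105, 106]
fr_units = [201, 202, 203, 204]
print(interleave_units(en_units, fr_units, span_len=3))
```

Because the method operates purely on discrete unit IDs, it needs no transcripts or alignment supervision; any unit tokenizer that maps speech to cluster indices could feed such a mixer.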