🤖 AI Summary
Large language models (LLMs), pretrained exclusively on text, struggle to adapt to the speech modality, limiting speech-to-speech translation (S2ST) performance. To address this, we propose a scheduled interleaved speech-text training paradigm. Our approach features: (1) a word-level aligned interleaving mechanism that jointly encodes discrete speech units and text tokens; and (2) a progressive text-ratio decay strategy that enables controllable cross-modal transfer. Built on the LLaMA3.2-1B architecture and fine-tuned on the CVSS dataset, our method significantly improves S2ST quality, especially for low-resource languages, yielding substantial gains in BLEU (+4.2) and COMET (+6.8) scores. Moreover, it improves modality adaptation efficiency and cross-lingual generalization without requiring additional speech-specific architectural modifications or external alignment tools.
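To make the interleaving mechanism concrete, here is a minimal Python sketch. The function name `interleave_words`, the argument names, and the per-word Bernoulli sampling are illustrative assumptions; the summary specifies only that aligned text tokens are interleaved with discrete speech units at the word level under a controllable text ratio.

```python
import random

def interleave_words(speech_units, text_tokens, text_ratio, rng=None):
    """Build one word-level interleaved training sequence.

    speech_units: per-word lists of discrete speech unit IDs,
                  obtained from a word-level alignment.
    text_tokens:  per-word lists of text token IDs (same length).
    text_ratio:   probability that a word is emitted as its text
                  tokens instead of its speech units.
    """
    rng = rng or random.Random()
    assert len(speech_units) == len(text_tokens)
    sequence = []
    for units, tokens in zip(speech_units, text_tokens):
        # With probability text_ratio, keep this word as text;
        # otherwise represent it with its discrete speech units.
        sequence.extend(tokens if rng.random() < text_ratio else units)
    return sequence

# Example with three aligned words (all IDs are placeholders):
seq = interleave_words(
    speech_units=[[501, 502], [777], [640, 641, 642]],
    text_tokens=[[12], [98, 99], [7]],
    text_ratio=0.8,  # early training: most words stay as text
)
```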
📝 Abstract
Speech-to-speech translation (S2ST) has been advanced with large language models (LLMs), which are fine-tuned on discrete speech units. In such approaches, modality adaptation from text to speech has been an issue: LLMs are trained on text-only data, which makes it challenging to adapt them to the speech modality with limited speech-to-speech data. To address this training difficulty, we propose scheduled interleaved speech-text training. During training, we use interleaved speech-text units instead of speech units alone, where aligned text tokens are interleaved at the word level. We gradually decrease the ratio of text as training progresses to facilitate progressive modality adaptation from text to speech. We conduct experimental evaluations by fine-tuning LLaMA3.2-1B for S2ST on the CVSS dataset, and show that the proposed method consistently improves translation performance, especially for languages with limited training data.
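The abstract states only that the text ratio is gradually decreased as training progresses; a linear schedule is one plausible instantiation. The sketch below (with a hypothetical `text_ratio_schedule`) shows how such a scheduler could drive the interleaving step: early in training the sequences are mostly text, close to the LLM's pretraining distribution, and by the end they are pure speech units, matching inference.

```python
def text_ratio_schedule(step: int, total_steps: int,
                        start: float = 1.0, end: float = 0.0) -> float:
    """Gradually decay the text ratio over training.

    Linear decay is an assumption made for illustration; the paper
    only says the ratio of text decreases as training progresses.
    """
    progress = min(step / total_steps, 1.0)
    return start + (end - start) * progress

# At step 0 the ratio is 1.0 (all text, closest to LLM pretraining);
# at total_steps it reaches 0.0 (speech units only, as at inference).
for step in (0, 5_000, 10_000):
    print(step, text_ratio_schedule(step, total_steps=10_000))
    # -> 1.0, 0.5, 0.0
```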