🤖 AI Summary
Aligning speech and text representations in multilingual speech-to-text (ST) tasks remains challenging, and existing methods that enforce premature cross-lingual convergence often degrade language-specific characteristics. Method: This paper proposes PART (Progressive Alignment Representation Training), a framework that decouples intra-lingual and inter-lingual alignment. PART dynamically activates large language model parameters across multiple training stages, progressively optimizing monolingual representation quality before enforcing cross-lingual semantic consistency, thereby avoiding premature convergence. It integrates text-auxiliary tasks, multi-task learning, and staged training to enhance semantic understanding. Contribution/Results: PART achieves state-of-the-art performance on four major benchmarks (CommonVoice 15, Fleurs, Wenetspeech, and CoVoST2), outperforming prevailing approaches. Empirical results demonstrate that it improves cross-lingual generalization while preserving language-specific distinctions.
📝 Abstract
Large language models (LLMs) have expanded from text to speech, giving rise to Speech Large Models (SLMs) that support recognition, translation, and synthesis. A key challenge is aligning speech and text representations, which becomes harder in multilingual settings. Existing methods often freeze LLM parameters and train encoders on multilingual data, but this forces cross-language convergence and limits performance. We introduce Progressive Alignment Representation Training (PART), a multi-stage and multi-task framework that separates within-language from cross-language alignment. During cross-language training, LLM parameters are dynamically activated, and text-based tasks are later introduced to enhance multilingual understanding. Experiments on CommonVoice 15, Fleurs, Wenetspeech, and CoVoST2 show that PART surpasses conventional approaches, with analysis confirming its ability to balance language-specific distinctions and cross-language generalization. These results demonstrate PART's effectiveness and generality for multilingual speech modality alignment.
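The staged, dynamic activation of parameters described in the abstract can be sketched as a simple training schedule. The stage layout and parameter-group names below are illustrative assumptions for exposition, not the paper's released implementation:

```python
# Hypothetical sketch of a PART-style staged training schedule.
# Parameter-group names and the three-stage layout are assumptions,
# not the paper's actual code.

def trainable_groups(stage: int) -> set:
    """Return which parameter groups are updated at a given stage.

    Stage 1: intra-lingual alignment -- only the speech encoder and
             adapter are trained; the LLM stays frozen.
    Stage 2: inter-lingual alignment -- LLM parameters are activated
             alongside the speech-side modules.
    Stage 3: text-auxiliary multi-task training -- text tasks are
             added, with all groups active.
    """
    schedule = {
        1: {"speech_encoder", "adapter"},
        2: {"speech_encoder", "adapter", "llm"},
        3: {"speech_encoder", "adapter", "llm", "text_tasks"},
    }
    return schedule[stage]


# In a real trainer, this schedule would gate optimizer updates,
# e.g. by toggling requires_grad on each group's parameters
# before the corresponding training stage begins.
for stage in (1, 2, 3):
    print(stage, sorted(trainable_groups(stage)))
```

The point of the staged schedule is that monolingual (within-language) alignment is optimized first with the LLM frozen, and cross-lingual objectives only begin once the LLM's parameters are activated, avoiding the premature convergence the abstract describes.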