PART: Progressive Alignment Representation Training for Multilingual Speech-To-Text with LLMs

📅 2025-09-23
🤖 AI Summary
Aligning speech and text representations in multilingual speech-to-text (ST) tasks remains challenging, and existing methods that enforce premature cross-lingual convergence often degrade language-specific characteristics. Method: This paper proposes PART, a Progressive Alignment framework that decouples intra-lingual and inter-lingual alignment. PART dynamically activates large language model parameters across multiple training stages, progressively optimizing monolingual representation quality before enforcing cross-lingual semantic consistency—thereby avoiding premature convergence. It integrates text-auxiliary tasks, multi-task learning, and staged training to enhance semantic understanding. Contribution/Results: PART achieves state-of-the-art performance across four major benchmarks—CommonVoice 15, Fleurs, Wenetspeech, and CoVoST2—outperforming prevailing approaches. Empirical results demonstrate its effectiveness and generalizability in improving cross-lingual generalization while preserving language specificity.

📝 Abstract
Large language models (LLMs) have expanded from text to speech, giving rise to Speech Large Models (SLMs) that support recognition, translation, and synthesis. A key challenge is aligning speech and text representations, which becomes harder in multilingual settings. Existing methods often freeze LLM parameters and train encoders on multilingual data, but this forces cross-language convergence and limits performance. We introduce Progressive Alignment Representation Training (PART), a multi-stage and multi-task framework that separates within-language from cross-language alignment. During cross-language training, LLM parameters are dynamically activated, and text-based tasks are later introduced to enhance multilingual understanding. Experiments on CommonVoice 15, Fleurs, Wenetspeech, and CoVoST2 show that PART surpasses conventional approaches, with analysis confirming its ability to balance language-specific distinctions and cross-language generalization. These results demonstrate PART's effectiveness and generality for multilingual speech modality alignment.
Problem

Research questions and friction points this paper is trying to address.

Aligning speech and text representations in multilingual speech models
Overcoming cross-language convergence limitations in existing methods
Balancing language-specific distinctions with cross-language generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Progressive multi-stage multi-task alignment framework
Dynamically activates LLM parameters during training
Separates within-language and cross-language alignment stages
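The staged activation described above can be sketched as a simple stage schedule that decides which parameter groups are trainable at each phase. This is a minimal illustrative sketch: the stage names and the groups unfrozen per stage are assumptions for illustration, not the paper's actual configuration.

```python
# Sketch of staged parameter activation for progressive alignment.
# Stage names and per-stage trainable groups are illustrative assumptions,
# not PART's actual training recipe.

STAGES = [
    # (stage name, parameter groups trainable in that stage)
    ("intra_lingual",  {"speech_encoder", "adapter"}),         # monolingual alignment: LLM frozen
    ("inter_lingual",  {"speech_encoder", "adapter", "llm"}),  # cross-lingual: LLM activated
    ("text_auxiliary", {"adapter", "llm"}),                    # text-based tasks refine the LLM
]

def trainable_groups(stage_name: str) -> set:
    """Return the parameter groups to unfreeze for a given training stage."""
    for name, groups in STAGES:
        if name == stage_name:
            return groups
    raise ValueError(f"unknown stage: {stage_name}")

def apply_stage(model_groups: dict, stage_name: str) -> dict:
    """Mark each parameter group as trainable (True) or frozen (False)."""
    active = trainable_groups(stage_name)
    return {group: (group in active) for group in model_groups}
```

In a real implementation these flags would map onto something like setting `requires_grad` on the corresponding parameter tensors before each stage begins; the key idea is that the LLM group only joins the trainable set once monolingual alignment is established.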
Pei Zhang — Tongyi Lab, Alibaba Group
Andong Chen — Tongyi Lab, Alibaba Group
Xi Chen — Tongyi Lab, Alibaba Group; The Chinese University of Hong Kong
Baosong Yang — Alibaba Inc.
Derek F. Wong — Professor, Department of Computer and Information Science, University of Macau
Fei Huang — Tongyi Lab, Alibaba Group