Towards Fine-Grained Code-Switch Speech Translation with Semantic Space Alignment

📅 2025-11-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Code-switched (CS) speech translation faces dual challenges: complex semantic modeling and scarcity of annotated CS data. To address these, we propose a Mixture-of-Experts (MoE)-based speech projector that constructs language-specific semantic subspaces. Our method employs a multi-stage training paradigm: (1) pretraining on monolingual ASR and speech translation (ST) data; and (2) fine-tuning with language-specific loss, intra-group load-balancing loss, and transition loss—enabling fine-grained speech–text semantic alignment without requiring CS-labeled data. Experiments demonstrate significant improvements over strong baselines across multiple mainstream CS-ST benchmarks. The approach exhibits strong generalization capability and low dependency on labeled CS data, offering a novel, resource-efficient paradigm for cross-lingual speech translation in low-resource settings.
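The summary above describes an MoE speech projector whose experts specialize in language-specific semantic subspaces. The paper's actual projector internals are not shown on this page, so the following is a hypothetical numpy sketch of the general idea: two-level top-1 routing that first picks a language expert group, then the best expert inside that group. All dimensions, the grouping scheme, and the `router_w`/`expert_w` names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)

# Hypothetical setup: 2 language groups (e.g. zh / en), 2 experts per group.
n_tokens, d_model = 6, 8
n_groups, experts_per_group = 2, 2
n_experts = n_groups * experts_per_group

tokens = rng.normal(size=(n_tokens, d_model))              # speech-frame features
router_w = rng.normal(size=(d_model, n_experts))           # router projection
expert_w = rng.normal(size=(n_experts, d_model, d_model))  # per-expert linear maps

logits = tokens @ router_w                                 # (n_tokens, n_experts)

# Two-level routing: choose a language group first, then the top expert within it.
group_logits = logits.reshape(n_tokens, n_groups, experts_per_group)
group_scores = softmax(group_logits.max(axis=-1), axis=-1)  # (n_tokens, n_groups)
group_id = group_scores.argmax(axis=-1)
inner_id = group_logits[np.arange(n_tokens), group_id].argmax(axis=-1)
expert_id = group_id * experts_per_group + inner_id

# Each token is projected by its selected expert (top-1 dispatch).
out = np.stack([tokens[i] @ expert_w[expert_id[i]] for i in range(n_tokens)])
print(out.shape)  # (6, 8)
```

In a real model the router and experts would be trained end to end; this sketch only shows how group-then-expert dispatch partitions tokens across language subspaces.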

📝 Abstract
Code-switching (CS) speech translation (ST) refers to translating speech that alternates between two or more languages into a target language text, which poses significant challenges due to the complexity of semantic modeling and the scarcity of CS data. Previous studies tend to rely on the model itself to implicitly learn semantic modeling during training, and resort to inefficient and costly manual annotations for these two challenges. To mitigate these limitations, we propose enhancing Large Language Models (LLMs) with a Mixture of Experts (MoE) speech projector, where each expert specializes in the semantic subspace of a specific language, enabling fine-grained modeling of speech features. Additionally, we introduce a multi-stage training paradigm that utilizes readily available monolingual automatic speech recognition (ASR) and monolingual ST data, facilitating speech-text alignment and improving translation capabilities. During training, we leverage a combination of language-specific loss and intra-group load balancing loss to guide the MoE speech projector in efficiently allocating tokens to the appropriate experts, across expert groups and within each group, respectively. To bridge the data gap across different training stages and improve adaptation to the CS scenario, we further employ a transition loss, enabling smooth transitions of data between stages, to effectively address the scarcity of high-quality CS speech translation data. Extensive experiments on widely used datasets demonstrate the effectiveness and generality of our approach.
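The abstract mentions an intra-group load-balancing loss that spreads tokens across the experts within each group. The paper's exact formulation is not given on this page; one common choice, sketched here as an assumption, is the Switch-Transformer-style auxiliary loss (expert count times the dot product of token-fraction and mean router probability per expert), restricted to a single group.

```python
import numpy as np

def intra_group_balance_loss(probs, assign, n_experts):
    """Switch-Transformer-style auxiliary loss within one expert group (assumed form).

    probs  : (n_tokens, n_experts) router probabilities over the group's experts
    assign : (n_tokens,) index of the expert each token was dispatched to
    """
    frac_tokens = np.bincount(assign, minlength=n_experts) / len(assign)
    mean_probs = probs.mean(axis=0)
    return n_experts * float(frac_tokens @ mean_probs)

# Balanced routing attains the minimum (1.0); collapsed routing is penalized (2.0).
balanced = intra_group_balance_loss(np.full((4, 2), 0.5), np.array([0, 1, 0, 1]), 2)
collapsed = intra_group_balance_loss(np.tile([1.0, 0.0], (4, 1)), np.zeros(4, int), 2)
print(balanced, collapsed)  # 1.0 2.0
```

Applying such a loss separately per language group (alongside a language-specific routing loss across groups) matches the abstract's description of guiding allocation "across expert groups and within each group, respectively."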
Problem

Research questions and friction points this paper is trying to address.

Translating code-switching speech with multiple languages into target text
Addressing semantic modeling complexity and data scarcity in speech translation
Improving fine-grained modeling of multilingual speech features using MoE
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixture of Experts speech projector for fine-grained modeling
Multi-stage training with monolingual ASR and ST data
Language-specific and load balancing losses for expert allocation
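The innovations above combine several objectives during fine-tuning. The paper's weighting scheme is not stated on this page, so the following is only a hypothetical sketch of a weighted sum of the named terms; the coefficients `alpha`, `beta`, and `gamma` are illustrative placeholders.

```python
def total_loss(task, lang, balance, transition, alpha=0.1, beta=0.01, gamma=0.1):
    """Hypothetical combination of the translation (task) loss with the
    language-specific, intra-group load-balancing, and transition losses.
    The weights are illustrative, not taken from the paper."""
    return task + alpha * lang + beta * balance + gamma * transition

# Example scalar values only; real training would use per-batch loss tensors.
print(round(total_loss(2.0, 1.0, 1.0, 0.5), 2))  # 2.16
```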
Yan Gao
School of Informatics, Xiamen University, China
Yazheng Yang
Department of Computer Science, Hong Kong University
Zhibin Lan
Xiamen University
Natural Language Processing
Yidong Chen
School of Informatics, Xiamen University, China
Min Zhang
Huawei Translation Services Center, Beijing, China
Daimeng Wei
Huawei Translation Services Center, Beijing, China
Hui Huang
NLP2CT Lab, Department of Computer and Information Science, University of Macau
Jinsong Su
Xiamen University
Natural Language Processing · Deep Learning · Neural Machine Translation