POTSA: A Cross-Lingual Speech Alignment Framework for Low Resource Speech-to-Text Translation

📅 2025-11-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing cross-lingual speech-to-text translation (S2TT) methods neglect semantic commonalities across source languages, limiting performance in low-resource and zero-shot settings. To address this, we propose POTSA—a Parallel Optimal Transport-based cross-lingual speech alignment framework—introducing optimal transport (OT) to low-resource S2TT for the first time. POTSA employs a Q-Former-driven token-level OT constraint, a bias compensation module, and a layer-wise scheduling strategy to progressively align cross-lingual speech representations from coarse- to fine-grained levels. Evaluated on the FLEURS benchmark, POTSA achieves state-of-the-art performance using only 10 hours of parallel speech per language: it improves average BLEU by 0.93 points across five high-resource languages and by 5.05 points on zero-shot languages. These gains demonstrate significantly enhanced multilingual semantic consistency and generalization capability.

📝 Abstract
Speech Large Language Models (SpeechLLMs) have achieved breakthroughs in multilingual speech-to-text translation (S2TT). However, existing approaches often overlook semantic commonalities across source languages, leading to biased translation performance. In this work, we propose POTSA (Parallel Optimal Transport for Speech Alignment), a new framework based on cross-lingual parallel speech pairs and Optimal Transport (OT), designed to bridge high- and low-resource translation gaps. First, we introduce a Bias Compensation module to coarsely align initial speech representations across languages. Second, we impose token-level OT constraints on a Q-Former using parallel speech pairs to establish fine-grained consistency of representations. Then, we apply a layer scheduling strategy to focus OT constraints on the most semantically beneficial layers. Experiments on the FLEURS dataset show that our method achieves SOTA performance, with +0.93 BLEU on average over five common languages and +5.05 BLEU on zero-shot languages, using only 10 hours of parallel speech per source language.
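The token-level OT constraint described above can be illustrated with a minimal NumPy sketch: entropy-regularized optimal transport (Sinkhorn iterations) between two sequences of speech token embeddings, with a cosine-distance cost. The function names, the cosine cost, and the uniform marginals are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sinkhorn(cost, eps=0.1, n_iters=200):
    """Entropy-regularized OT plan between two uniform marginals.
    cost: (n, m) pairwise cost matrix. Returns the transport plan P."""
    n, m = cost.shape
    a = np.full(n, 1.0 / n)          # uniform marginal over language-A tokens
    b = np.full(m, 1.0 / m)          # uniform marginal over language-B tokens
    K = np.exp(-cost / eps)          # Gibbs kernel
    u = np.ones(n)
    for _ in range(n_iters):         # alternating marginal projections
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

def ot_alignment_loss(x, y, eps=0.1):
    """Token-level OT loss between parallel speech representations.
    x: (n, d) embeddings for language A, y: (m, d) for language B.
    Cost is 1 - cosine similarity (an assumed choice)."""
    xn = x / np.linalg.norm(x, axis=1, keepdims=True)
    yn = y / np.linalg.norm(y, axis=1, keepdims=True)
    cost = 1.0 - xn @ yn.T
    P = sinkhorn(cost, eps)
    return float((P * cost).sum())   # transport cost = alignment loss
```

Identical sequences yield a near-zero loss, while unrelated sequences incur a large transport cost, which is what makes the quantity usable as a cross-lingual consistency penalty.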
Problem

Research questions and friction points this paper is trying to address.

Addresses biased translation in low-resource multilingual speech-to-text systems
Bridges performance gaps between high- and low-resource language translations
Improves cross-lingual semantic alignment using parallel speech pairs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-lingual speech alignment using Optimal Transport
Bias Compensation module for coarse representation alignment
Token-level OT constraints with layer scheduling strategy
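The layer scheduling idea, progressively shifting the OT constraint from coarse to fine-grained layers, can be sketched as a step-dependent per-layer weight. The triangular window and the linear progression are hypothetical choices for illustration; the paper's actual schedule may differ.

```python
def layer_weight(layer: int, num_layers: int, step: int, total_steps: int) -> float:
    """Hypothetical coarse-to-fine schedule for the OT constraint.

    Early in training the weight concentrates on the lowest layer
    (coarse representations); the focus moves linearly toward the top
    layer (fine-grained representations) as training progresses.
    """
    progress = min(step / max(total_steps, 1), 1.0)
    focus = progress * (num_layers - 1)        # fractional "focus" layer index
    return max(0.0, 1.0 - abs(layer - focus))  # triangular window around focus

def scheduled_ot_loss(per_layer_losses, step, total_steps):
    """Weighted sum of per-layer OT losses under the schedule above."""
    n = len(per_layer_losses)
    return sum(layer_weight(i, n, step, total_steps) * loss
               for i, loss in enumerate(per_layer_losses))
```

Under this sketch, at step 0 only the bottom layer's OT loss contributes, and by the final step only the top layer's does, mirroring the coarse-to-fine alignment described in the summary.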