End-to-End Speech Translation for Low-Resource Languages Using Weakly Labeled Data

📅 2025-06-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the scarcity of high-quality annotated data for speech translation (ST) in low-resource languages, this paper proposes a weakly supervised end-to-end ST framework. Methodologically, it introduces (i) the first systematic validation of weakly labeled speech–text pairs for low-resource ST; (ii) a cross-lingual speech–text alignment approach leveraging multilingual sentence embeddings; and (iii) an integrated pipeline combining bitext mining, controllable quality-and-quantity-aware data distillation, and an end-to-end Transformer architecture. The framework is evaluated on four Indian language–to–Hindi ST tasks, achieving performance competitive with large-scale multimodal baselines such as SONAR and SeamlessM4T. Crucially, it substantially reduces reliance on costly human annotations. By enabling effective ST training with weak supervision and scalable data curation, the work establishes a practical, low-cost, and extensible paradigm for low-resource ST.
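The cross-lingual alignment step described above can be sketched as margin-based scoring over multilingual sentence embeddings, the scoring rule popularized by LASER-style bitext mining. This is a minimal illustration, not the paper's implementation: the toy vectors below stand in for real multilingual encoder outputs, and the `margin_scores` helper is hypothetical.

```python
import numpy as np

def margin_scores(src_emb, tgt_emb, k=2):
    """Margin-based bitext mining scores (ratio margin).

    src_emb: (n, d) source-side embeddings; tgt_emb: (m, d) target-side.
    Returns an (n, m) score matrix; high scores suggest translation pairs.
    """
    # Normalize rows so dot products are cosine similarities.
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sim = src @ tgt.T  # (n, m) cosine-similarity matrix
    # Average similarity to each side's k nearest neighbours.
    knn_src = np.sort(sim, axis=1)[:, -k:].mean(axis=1)  # (n,)
    knn_tgt = np.sort(sim, axis=0)[-k:, :].mean(axis=0)  # (m,)
    # Ratio margin: raw cosine divided by the mean of both kNN averages.
    return sim / (0.5 * (knn_src[:, None] + knn_tgt[None, :]))

# Toy vectors standing in for encoder outputs of 3 source / 3 target sentences;
# each target row is a slightly perturbed copy of its source counterpart.
src = np.array([[0.9, 0.1, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.9]])
tgt = np.array([[0.95, 0.05, 0.0], [0.0, 0.9, 0.1], [0.1, 0.1, 0.85]])
pairs = margin_scores(src, tgt).argmax(axis=1)
print(pairs)  # → [0 1 2]: each source aligns to its true counterpart
```

The margin normalization penalizes "hub" sentences that are close to everything, which plain cosine thresholding does not.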

📝 Abstract
The scarcity of high-quality annotated data presents a significant challenge in developing effective end-to-end speech-to-text translation (ST) systems, particularly for low-resource languages. This paper explores the hypothesis that weakly labeled data can be used to build ST models for low-resource language pairs. We constructed speech-to-text translation datasets with the help of bitext mining using state-of-the-art sentence encoders. We mined the multilingual Shrutilipi corpus to build Shrutilipi-anuvaad, a dataset comprising ST data for language pairs Bengali-Hindi, Malayalam-Hindi, Odia-Hindi, and Telugu-Hindi. We created multiple versions of training data with varying degrees of quality and quantity to investigate the effect of quality versus quantity of weakly labeled data on ST model performance. Results demonstrate that ST systems can be built using weakly labeled data, with performance comparable to massive multi-modal multilingual baselines such as SONAR and SeamlessM4T.
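The quality-versus-quantity experiments described in the abstract amount to filtering mined pairs at varying alignment-score thresholds: a higher cutoff yields a smaller but cleaner training set. A minimal sketch, with hypothetical pair IDs and scores (none taken from the paper):

```python
def distill(mined_pairs, threshold):
    """Keep only mined speech-text pairs whose alignment score clears the
    threshold; raising the threshold trades quantity for quality."""
    return [(src, tgt) for src, tgt, score in mined_pairs if score >= threshold]

# Hypothetical mined pairs: (source utterance id, target sentence id, score).
mined = [("u1", "t1", 1.20), ("u2", "t2", 1.05), ("u3", "t9", 0.90)]
for th in (0.8, 1.0, 1.1):
    print(f"threshold={th}: {len(distill(mined, th))} pairs")
# threshold=0.8: 3 pairs
# threshold=1.0: 2 pairs
# threshold=1.1: 1 pairs
```

Sweeping the threshold and training one ST model per resulting subset is one straightforward way to realize the controllable quality-and-quantity-aware distillation the summary refers to.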
Problem

Research questions and friction points this paper addresses.

Scarcity of high-quality annotated data for speech-to-text translation (ST) in low-resource languages
Whether weakly labeled data can support building end-to-end ST models
How the trade-off between quality and quantity of weakly labeled data affects ST performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training end-to-end ST models on weakly labeled speech–text pairs
Bitext mining with state-of-the-art multilingual sentence encoders
Shrutilipi-anuvaad: ST datasets for four low-resource Indian language–Hindi pairs
Aishwarya Pothula
Speech Processing Laboratory, International Institute of Information Technology, Hyderabad, India
Bhavana Akkiraju
Speech Processing Laboratory, International Institute of Information Technology, Hyderabad, India
Srihari Bandarupalli
Speech Processing Laboratory, International Institute of Information Technology, Hyderabad, India
D. Charan
Speech Processing Laboratory, International Institute of Information Technology, Hyderabad, India
Santosh Kesiraju
Brno University of Technology
Speech and language processing · Machine learning
A. Vuppala
Speech Processing Laboratory, International Institute of Information Technology, Hyderabad, India