🤖 AI Summary
To address the scarcity of high-quality annotated data for speech translation (ST) in low-resource languages, this paper proposes a weakly supervised end-to-end ST framework. Methodologically, it introduces (i) the first systematic validation of weakly labeled speech–text pairs for low-resource ST; (ii) a cross-lingual speech–text alignment approach leveraging multilingual sentence embeddings; and (iii) an integrated pipeline combining bitext mining, controllable quality-and-quantity-aware data distillation, and an end-to-end Transformer architecture. The framework is evaluated on four Indian language–to–Hindi ST tasks, achieving performance competitive with large-scale multimodal baselines such as SONAR and SeamlessM4T. Crucially, it substantially reduces reliance on costly human annotations. By enabling effective ST training with weak supervision and scalable data curation, the work establishes a practical, low-cost, and extensible paradigm for low-resource ST.
📝 Abstract
The scarcity of high-quality annotated data presents a significant challenge in developing effective end-to-end speech-to-text translation (ST) systems, particularly for low-resource languages. This paper explores the hypothesis that weakly labeled data can be used to build ST models for low-resource language pairs. We constructed ST datasets via bitext mining with state-of-the-art multilingual sentence encoders. We mined the multilingual Shrutilipi corpus to build Shrutilipi-anuvaad, a dataset comprising ST data for the language pairs Bengali–Hindi, Malayalam–Hindi, Odia–Hindi, and Telugu–Hindi. We created multiple versions of the training data with varying degrees of quality and quantity to investigate the effect of quality versus quantity of weakly labeled data on ST model performance. Results demonstrate that ST systems can be built from weakly labeled data, with performance comparable to massive multimodal multilingual baselines such as SONAR and SeamlessM4T.
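The mining step described above can be sketched as margin-based scoring over pre-computed multilingual sentence embeddings, a standard approach for bitext mining with encoders of this kind. This is an illustrative sketch, not the paper's exact procedure: the neighbourhood size `k`, the margin `threshold`, and the mutual-best-match rule are assumptions, and the toy embeddings stand in for real encoder outputs.

```python
import numpy as np

def mine_bitext(src_emb, tgt_emb, k=4, threshold=1.05):
    """Margin-based bitext mining over multilingual sentence embeddings.

    src_emb, tgt_emb: (n_src, d) and (n_tgt, d) embedding matrices.
    Returns a list of (src_idx, tgt_idx, margin_score) for mutual
    best matches whose margin score clears `threshold`.
    """
    # Normalize rows so dot products are cosine similarities.
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sim = src @ tgt.T  # (n_src, n_tgt) cosine similarity matrix

    k_s = min(k, sim.shape[1])
    k_t = min(k, sim.shape[0])
    # Average similarity to each side's k nearest neighbours.
    avg_src = np.sort(sim, axis=1)[:, -k_s:].mean(axis=1)
    avg_tgt = np.sort(sim, axis=0)[-k_t:, :].mean(axis=0)
    # Ratio margin: raw similarity relative to neighbourhood density,
    # which penalizes "hub" sentences that are close to everything.
    margin = sim / (0.5 * (avg_src[:, None] + avg_tgt[None, :]))

    pairs = []
    for i in range(sim.shape[0]):
        j = int(np.argmax(margin[i]))
        # Keep only mutual best matches above the margin threshold.
        if int(np.argmax(margin[:, j])) == i and margin[i, j] >= threshold:
            pairs.append((i, j, float(margin[i, j])))
    return pairs
```

Raising `threshold` keeps fewer but cleaner pairs, while lowering it admits more but noisier ones, which is one simple way to realize the quality-versus-quantity training-set variants the paper studies.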