🤖 AI Summary
Current machine translation systems cover only about 5% of the world's languages, severely limiting translation for low-resource, long-tail languages. To address this, we propose a data-efficient cross-lingual and cross-modal transfer framework: a character-level encoder, distilled from a multilingual teacher, is paired with a lightweight speech adapter, enabling unified modeling of text and speech inputs and potentially extending speech translation to 1,000+ languages. Our method integrates SONAR multilingual fixed-size embeddings, Meta's massively multilingual MMS CTC automatic speech recognition (ASR) model, teacher-student knowledge distillation, and an ASR-driven speech adaptation mechanism. Experiments demonstrate substantial improvements: on FLORES+ text translation (75 languages), our approach significantly outperforms subword-based baselines, especially for low-resource and unseen languages; on FLEURS speech-to-text translation (33 languages), it achieves state-of-the-art zero-shot performance, surpassing previous supervised and cascaded methods.
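As a rough illustration of the first stage described above, the sketch below shows how a character-level student encoder might be distilled into a fixed-size multilingual embedding space using parallel data. This is a minimal PyTorch sketch under assumptions not stated in the summary: the names (`CharStudentEncoder`, `distillation_step`), the Transformer depth, the mean pooling, and the MSE objective are all illustrative, and the 1024-dimensional output is chosen only to match SONAR's sentence-embedding size, with a frozen SONAR teacher assumed to supply the target embeddings.

```python
import torch
import torch.nn as nn

class CharStudentEncoder(nn.Module):
    """Hypothetical character-level student: embeds raw characters, runs a
    Transformer encoder, and mean-pools into a fixed-size sentence vector
    in the teacher's (SONAR-sized) embedding space."""

    def __init__(self, vocab_size=1024, d_model=512, out_dim=1024, n_layers=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.proj = nn.Linear(d_model, out_dim)

    def forward(self, char_ids, pad_mask):
        # pad_mask: bool (batch, seq), True where the position is padding.
        h = self.encoder(self.embed(char_ids), src_key_padding_mask=pad_mask)
        # Mean-pool over non-padding positions to get one vector per sentence.
        keep = (~pad_mask).unsqueeze(-1).float()
        pooled = (h * keep).sum(dim=1) / keep.sum(dim=1).clamp(min=1e-6)
        return self.proj(pooled)


def distillation_step(student, teacher_embed, char_ids, pad_mask, optimizer):
    """One teacher-student step: pull the student's character-level sentence
    embedding toward the frozen teacher embedding of the same (or a parallel)
    sentence, here with a simple MSE objective."""
    optimizer.zero_grad()
    student_embed = student(char_ids, pad_mask)
    loss = nn.functional.mse_loss(student_embed, teacher_embed)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the teacher space is shared across languages, the same loss can presumably be applied whether the target embedding comes from the source or the target side of a parallel pair, which is what lets a single character-level student cover many languages.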
📝 Abstract
Current translation systems, despite being highly multilingual, cover only 5% of the world's languages. Expanding language coverage to the long tail of low-resource languages requires data-efficient methods that rely on cross-lingual and cross-modal knowledge transfer. To this end, we propose a character-based approach to improve adaptability to new languages and modalities. Our method leverages SONAR, a multilingual fixed-size embedding space with separate modules for encoding and decoding. We use a teacher-student approach with parallel translation data to obtain a character-level encoder. Then, using ASR data, we train a lightweight adapter to connect a massively multilingual CTC ASR model (MMS) to the character-level encoder, potentially enabling speech translation from 1,000+ languages. Experimental results in text translation for 75 languages on FLORES+ demonstrate that our character-based approach achieves better language transfer than traditional subword-based models, outperforming them especially in low-resource settings and generalizing better zero-shot to unseen languages. Our speech adaptation, maximizing knowledge transfer from the text modality, achieves state-of-the-art results in speech-to-text translation on the FLEURS benchmark across 33 languages, surpassing previous supervised and cascade models, despite being a zero-shot model with only minimal supervision from ASR data.
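For the speech side, the following sketch illustrates the kind of lightweight adaptation the abstract describes: a small trainable adapter connects the frozen MMS CTC encoder to the frozen character-level encoder, trained on ASR data so that speech embeddings land near the text embeddings of their transcripts. Everything concrete here is an assumption rather than the paper's design: the names (`SpeechAdapter`, `speech_adaptation_step`), the convolutional downsampling, the 1280-dimensional MMS states, and the MSE-to-transcript-embedding objective are illustrative.

```python
import torch
import torch.nn as nn

class SpeechAdapter(nn.Module):
    """Hypothetical lightweight adapter: downsamples frame-level hidden states
    from the frozen MMS CTC encoder and projects them into the input space of
    the frozen character-level encoder."""

    def __init__(self, mms_dim=1280, char_dim=512, stride=4):
        super().__init__()
        self.down = nn.Conv1d(mms_dim, char_dim, kernel_size=stride, stride=stride)
        self.act = nn.GELU()

    def forward(self, mms_states):              # (batch, frames, mms_dim)
        x = mms_states.transpose(1, 2)           # Conv1d expects channels first
        x = self.act(self.down(x))               # reduce the frame rate by `stride`
        return x.transpose(1, 2)                 # (batch, frames // stride, char_dim)


def speech_adaptation_step(adapter, char_encoder_body, mms_states,
                           sonar_transcript_embed, optimizer):
    """One step on ASR data. `char_encoder_body` stands in for the frozen
    character encoder applied to continuous inputs (bypassing its character
    embedding lookup), returning per-position states in the sentence-embedding
    dimension; only the adapter's parameters are held by `optimizer`."""
    optimizer.zero_grad()
    adapted = adapter(mms_states)                           # trainable path
    speech_embed = char_encoder_body(adapted).mean(dim=1)   # pooled sentence vector
    loss = nn.functional.mse_loss(speech_embed, sonar_transcript_embed)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under this reading, keeping both large models frozen and updating only the adapter is what keeps the method zero-shot for translation: no translation labels ever touch the speech path, only ASR transcripts.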