🤖 AI Summary
This work addresses the challenge of jointly embedding text, speech, code, and mathematical expressions across more than a thousand high- and low-resource languages in a unified semantic space, a setting where existing cross-lingual sentence encoders struggle to balance alignment strength with downstream performance. The authors propose the OmniSONAR family of models, built on an LLM-initialized encoder-decoder architecture and trained with staged teacher-student distillation, a split-softmax contrastive loss, and synthetically generated hard negatives. OmniSONAR achieves high-quality alignment across 1,560 languages and multiple modalities: it halves cross-lingual retrieval error on FLORES, cuts it by a factor of 15 on BIBLE, outperforms NLLB-3B on multilingual translation benchmarks, exceeds prior models by 15 chrF++ points on 1,560-language into-English BIBLE translation, and attains 97% of SeamlessM4T's speech-to-text quality.
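The summary does not spell out how the teacher-student distillation works. As a rough sketch under stated assumptions (the function name and the MSE objective below are illustrative, not the paper's actual recipe), encoder distillation of this kind is commonly implemented by regressing a student encoder's sentence embeddings onto those of a frozen teacher, e.g. matching the embedding of a low-resource-language sentence to the teacher's embedding of its pivot translation:

```python
import numpy as np

def embedding_distillation_loss(student_emb: np.ndarray,
                                teacher_emb: np.ndarray) -> float:
    """Mean-squared error pulling student embeddings onto frozen teacher targets.

    Illustrative stand-in for the (unspecified) distillation objective: the
    student encodes a sentence in a new language, the teacher encodes its
    pivot translation, and the student is trained to match the teacher.
    """
    return float(np.mean((student_emb - teacher_emb) ** 2))

rng = np.random.default_rng(0)
teacher = rng.normal(size=(4, 8))                  # frozen teacher embeddings
student = teacher + 0.1 * rng.normal(size=(4, 8))  # imperfect student outputs
loss = embedding_distillation_loss(student, teacher)
```

In practice only the student's parameters receive gradients; the teacher's embedding space is held fixed so new languages land in the existing space.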
📝 Abstract
Cross-lingual sentence encoders typically cover only a few hundred languages and often trade downstream quality for stronger alignment, limiting their adoption. We introduce OmniSONAR, a new family of omnilingual, cross-lingual and cross-modal sentence embedding models that natively embed text, speech, code, and mathematical expressions in a single semantic space, while delivering state-of-the-art downstream performance at the scale of thousands of languages, from high-resource to extremely low-resource varieties. To reach this scale without representation collapse, we use progressive training. We first learn a strong foundational space for 200 languages with an LLM-initialized encoder-decoder, combining token-level decoding with a novel split-softmax contrastive loss and synthetic hard negatives. Building on this foundation, we expand to several thousand language varieties via a two-stage teacher-student encoder distillation framework. Finally, we demonstrate the cross-modal extensibility of this space by seamlessly mapping 177 spoken languages into it. OmniSONAR halves cross-lingual similarity search error on the 200-language FLORES dataset and reduces error by a factor of 15 on the 1,560-language BIBLE benchmark. It also enables strong translation, outperforming NLLB-3B on multilingual benchmarks and exceeding prior models (including much larger LLMs) by 15 chrF++ points on 1,560-language into-English BIBLE translation. OmniSONAR also performs strongly on MTEB and XLCoST. For speech, OmniSONAR achieves a 43% lower similarity-search error and reaches 97% of SeamlessM4T speech-to-text quality, despite being zero-shot for translation (trained only on ASR data). Finally, by training Spectrum, an encoder-decoder LM that operates on OmniSONAR embedding sequences, exclusively on English text, we unlock high-performance transfer to thousands of languages and to speech for complex downstream tasks.
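The abstract names a split-softmax contrastive loss but does not define it. As a hedged point of reference, a standard temperature-scaled in-batch contrastive (InfoNCE) loss, which the paper's variant presumably modifies, can be sketched as follows; the function name, temperature value, and use of in-batch negatives are assumptions for illustration, not the paper's formulation:

```python
import numpy as np

def contrastive_loss(anchors: np.ndarray, positives: np.ndarray,
                     temperature: float = 0.05) -> float:
    """Temperature-scaled in-batch contrastive (InfoNCE) loss.

    Illustrative stand-in for the paper's split-softmax variant: row i of
    `anchors` (e.g. a sentence) should score highest against row i of
    `positives` (e.g. its translation); all other rows act as negatives.
    """
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = (a @ p.T) / temperature                      # cosine similarities
    logits -= logits.max(axis=1, keepdims=True)           # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))            # -log p(correct pair)

rng = np.random.default_rng(1)
x = rng.normal(size=(8, 16))
aligned_loss = contrastive_loss(x, x)        # perfectly aligned pairs: low loss
shuffled_loss = contrastive_loss(x, x[::-1]) # mismatched pairs: high loss
```

Synthetic hard negatives, as mentioned in the abstract, would enter by appending near-miss rows to the negative set so the softmax denominator includes deliberately confusable candidates.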