🤖 AI Summary
Multilingual sentence encoders face two key challenges: (1) the "curse of multilinguality" arising from parameter sharing, which degrades monolingual representation quality; and (2) conflicting optimization objectives between cross-lingual alignment and monolingual semantic structure learning. To address these, we propose a modular decoupled training paradigm: first, language-specific encoders are trained independently to eliminate multilingual interference; second, lightweight, plug-and-play cross-lingual alignment adapters are introduced to map non-English encoders into the English semantic space. This work pioneers a separation-of-concerns architecture that explicitly disentangles language specialization from cross-lingual alignment, thereby circumventing the pitfalls of shared-parameter designs. Our approach consistently outperforms strong baselines, including mBERT, XLM-R, and LaBSE, on semantic textual similarity and multiple-choice question answering. Notably, it delivers substantial gains for low-resource languages in both monolingual and cross-lingual settings.
📝 Abstract
Multilingual sentence encoders are commonly obtained by training multilingual language models to map sentences from different languages into a shared semantic space. As such, they are subject to the curse of multilinguality, a loss of monolingual representational accuracy due to parameter sharing. Another limitation of multilingual sentence encoders is the trade-off between monolingual and cross-lingual performance. Training for cross-lingual alignment of sentence embeddings distorts the optimal monolingual structure of the semantic spaces of individual languages, harming the utility of sentence embeddings in monolingual tasks. In this work, we address both issues by modular training of sentence encoders, i.e., by separating monolingual specialization from cross-lingual alignment. We first efficiently train language-specific sentence encoders to avoid negative interference between languages (i.e., the curse). We then align all non-English monolingual encoders to the English encoder by training a cross-lingual alignment adapter on top of each, preventing interference with the monolingual specialization from the first step. In both steps, we resort to contrastive learning on machine-translated paraphrase data. Monolingual and cross-lingual evaluations on semantic textual similarity/relatedness and multiple-choice QA render our modular solution more effective than multilingual sentence encoders, especially benefiting low-resource languages.
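The second step described above can be sketched as follows: a small adapter sits on top of a frozen non-English encoder and is trained with an in-batch contrastive (InfoNCE-style) objective so that each adapted embedding moves toward the embedding of its English translation while moving away from the other English sentences in the batch. This is a minimal NumPy illustration, not the authors' implementation; the `AlignmentAdapter` class, its linear form, and the temperature value are assumptions for the sketch.

```python
import numpy as np

def info_nce_loss(aligned, target, temperature=0.05):
    """In-batch contrastive loss: row i of `aligned` (adapted non-English
    embedding) should be most similar to row i of `target` (the frozen
    English embedding of its translation); all other rows act as negatives."""
    # L2-normalize rows so dot products are cosine similarities
    a = aligned / np.linalg.norm(aligned, axis=1, keepdims=True)
    t = target / np.linalg.norm(target, axis=1, keepdims=True)
    logits = a @ t.T / temperature  # (batch, batch) similarity matrix
    # log-softmax over each row; diagonal entries are the positive pairs
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

class AlignmentAdapter:
    """Hypothetical lightweight adapter: a single trainable linear map that
    projects a frozen non-English encoder's output into the English
    encoder's space. The monolingual encoders themselves stay untouched,
    so their monolingual specialization is preserved."""
    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        # initialize near the identity so training starts close to a no-op
        self.W = np.eye(dim) + 0.01 * rng.standard_normal((dim, dim))

    def __call__(self, embeddings):
        return embeddings @ self.W
```

Only the adapter's parameters (here `W`) would receive gradients during alignment; the English-side embeddings serve as a fixed anchor space, which is what keeps cross-lingual alignment from interfering with the monolingual encoders trained in the first step.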