Modular Sentence Encoders: Separating Language Specialization from Cross-Lingual Alignment

📅 2024-07-20
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Multilingual sentence encoders face two key challenges: (1) the “multilingual curse” arising from parameter sharing, which degrades monolingual representation quality; and (2) conflicting optimization objectives between cross-lingual alignment and monolingual semantic structure learning. To address these, we propose a modular decoupled training paradigm: first, language-specific encoders are trained independently to eliminate multilingual interference; second, lightweight, plug-and-play cross-lingual alignment adapters are introduced to map non-English encoders into the English semantic space. This work pioneers a separation-of-concerns architecture that explicitly disentangles language specialization from cross-lingual alignment, thereby circumventing the pitfalls of shared-parameter designs. Our approach consistently outperforms strong baselines—including mBERT, XLM-R, and LaBSE—on semantic textual similarity and multiple-choice question answering. Notably, it delivers substantial gains for low-resource languages in both monolingual and cross-lingual settings, achieving simultaneous improvement across both evaluation axes.

📝 Abstract
Multilingual sentence encoders are commonly obtained by training multilingual language models to map sentences from different languages into a shared semantic space. As such, they are subject to the curse of multilinguality, a loss of monolingual representational accuracy due to parameter sharing. Another limitation of multilingual sentence encoders is the trade-off between monolingual and cross-lingual performance. Training for cross-lingual alignment of sentence embeddings distorts the optimal monolingual structure of the semantic spaces of individual languages, harming the utility of sentence embeddings in monolingual tasks. In this work, we address both issues by modular training of sentence encoders, i.e., by separating monolingual specialization from cross-lingual alignment. We first efficiently train language-specific sentence encoders to avoid negative interference between languages (i.e., the curse). We then align all non-English monolingual encoders to the English encoder by training a cross-lingual alignment adapter on top of each, preventing interference with the monolingual specialization from the first step. In both steps, we resort to contrastive learning on machine-translated paraphrase data. Monolingual and cross-lingual evaluations on semantic text similarity/relatedness and multiple-choice QA show our modular solution to be more effective than multilingual sentence encoders, especially benefiting low-resource languages.
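The alignment step described above can be illustrated with a toy sketch. The paper trains an adapter on top of a frozen non-English encoder with contrastive learning; the sketch below is not the authors' implementation: it uses random vectors as stand-ins for frozen encoder outputs, fits a single linear "adapter" in closed form (least squares) instead of by gradient descent, and only uses a symmetric InfoNCE-style loss to verify that alignment brings parallel sentences together. All names and fixtures here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for frozen sentence embeddings: rows are parallel
# sentences, one matrix per encoder ("English" vs. a "non-English" space).
d, n = 16, 32
en = rng.normal(size=(n, d))                          # English encoder outputs
true_map = rng.normal(size=(d, d))
xx = en @ true_map + 0.05 * rng.normal(size=(n, d))   # non-English outputs


def info_nce(a, b, temp=0.1):
    """Symmetric-style InfoNCE over in-batch negatives (cosine logits)."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    logits = (a @ b.T) / temp
    # Row-wise cross-entropy with the diagonal (translation pairs) as positives.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))


# "Adapter": one linear map fit in closed form, standing in for the
# contrastively trained adapter module of the paper.
W, *_ = np.linalg.lstsq(xx, en, rcond=None)
aligned = xx @ W

loss_before = info_nce(xx, en)
loss_after = info_nce(aligned, en)
print(f"before alignment: {loss_before:.3f}, after: {loss_after:.3f}")
```

Because the adapter is fit only on top of the frozen monolingual encoders, the alignment step cannot distort their monolingual semantic spaces, which is the point of the modular design.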
Problem

Research questions and friction points this paper is trying to address.

Addressing curse of multilinguality in sentence encoders
Resolving trade-off between cross-lingual and monolingual task performance
Improving modular training for balanced multilingual embedding quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modular training of language-specific monolingual modules
Cross-lingual alignment adapters for embedding alignment
Dual-data training for cross-lingual task optimization