Multilingual Information Retrieval with a Monolingual Knowledge Base

📅 2025-06-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the challenge of leveraging monolingual knowledge bases for cross-lingual knowledge transfer to low-resource languages. It proposes a language-agnostic fine-tuning method based on weighted sampling for contrastive learning: multilingual sentences are jointly mapped into the embedding space of a pivot language (e.g., English) without requiring translation or bilingual alignment, enabling cross-lingual semantic alignment. The method is compatible with multilingual mixing and code-switching scenarios and relies solely on a monolingual knowledge base. Evaluated on cross-lingual information retrieval, it improves MRR by up to 31.03% and Recall@3 by up to 33.98% over standard sampling strategies, demonstrating substantial gains in retrieval effectiveness for low-resource languages. The approach offers an efficient, lightweight, and scalable route to cross-lingual knowledge acquisition without parallel data or resource-intensive translation pipelines.

📝 Abstract
Multilingual information retrieval has emerged as a powerful tool for expanding knowledge sharing across languages. However, high-quality knowledge bases are often scarce and available only in a limited set of languages, so an effective embedding model that maps sentences from different languages into the same feature vector space as the knowledge-base language becomes the key ingredient for cross-language knowledge sharing, especially for transferring knowledge available in high-resource languages to low-resource ones. In this paper we propose a novel strategy for fine-tuning multilingual embedding models with weighted sampling for contrastive learning, enabling multilingual information retrieval with a monolingual knowledge base. We demonstrate that the weighted sampling strategy produces performance gains over standard sampling strategies of up to 31.03% in MRR and up to 33.98% in Recall@3. Additionally, the proposed methodology is language-agnostic and applicable to both multilingual and code-switching use cases.
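The retrieval setting the abstract describes can be sketched in a few lines: once a shared multilingual encoder maps queries and knowledge-base entries into one vector space, retrieval against a monolingual (e.g., English) knowledge base reduces to nearest-neighbor search. The encoder itself and the synthetic vectors below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

# Hypothetical pre-computed embeddings: a monolingual (English) knowledge
# base and a query in another language, both assumed to have been mapped
# by a shared multilingual encoder into the same vector space.
rng = np.random.default_rng(1)
kb = rng.normal(size=(100, 16))
kb /= np.linalg.norm(kb, axis=1, keepdims=True)   # unit-normalize rows

# Simulate a query that is semantically close to KB entry 42.
query = kb[42] + 0.05 * rng.normal(size=16)
query /= np.linalg.norm(query)

scores = kb @ query                                # cosine similarity
top3 = np.argsort(scores)[::-1][:3]                # Recall@3-style retrieval
print(top3[0])                                     # index of the best match
```

With unit-normalized vectors, the dot product is exactly cosine similarity, so a single matrix-vector product scores the whole knowledge base; metrics such as MRR and Recall@3 are then computed over the ranked `top3`-style lists.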
Problem

Research questions and friction points this paper is trying to address.

Enabling cross-language knowledge sharing with monolingual knowledge base
Improving multilingual embedding models for information retrieval
Transferring knowledge from high-resource to low-resource languages
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tune multilingual embedding models
Weighted sampling for contrastive learning
Enable retrieval with monolingual knowledge base
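The weighted-sampling idea in the list above can be illustrated with a minimal sketch: instead of drawing negatives uniformly for a contrastive (InfoNCE-style) loss, candidates are sampled with non-uniform weights. The exact weighting scheme, the similarity-proportional choice below, and all function names are assumptions for illustration; the paper's actual sampling strategy and training loop are not reproduced here.

```python
import numpy as np

def weighted_sample_negatives(sims, k, rng):
    """Sample k negative indices with probability proportional to exp(similarity).

    Harder negatives (more similar to the anchor) are drawn more often;
    this softmax-style weighting is one plausible instantiation, not
    necessarily the paper's.
    """
    weights = np.exp(sims)
    probs = weights / weights.sum()
    return rng.choice(len(sims), size=k, replace=False, p=probs)

def info_nce_loss(anchor, positive, negatives, temperature=0.05):
    """Standard InfoNCE contrastive loss for one (anchor, positive) pair."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()                 # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])               # positive sits at index 0

rng = np.random.default_rng(0)
dim = 8
anchor = rng.normal(size=dim)                      # e.g. a low-resource-language sentence
positive = anchor + 0.1 * rng.normal(size=dim)     # its pivot-language counterpart
pool = rng.normal(size=(50, dim))                  # candidate negatives from the KB

sims = np.array([anchor @ p / (np.linalg.norm(anchor) * np.linalg.norm(p))
                 for p in pool])
neg_idx = weighted_sample_negatives(sims, k=8, rng=rng)
loss = info_nce_loss(anchor, positive, pool[neg_idx])
print(float(loss))
```

Minimizing this loss pulls each sentence toward its pivot-language counterpart and pushes it away from the sampled negatives, which is how cross-lingual alignment is induced without parallel data beyond the monolingual knowledge base.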