Multilingual Open QA on the MIA Shared Task

📅 2025-01-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the challenge of zero-shot cross-lingual information retrieval (CLIR) for low-resource languages, this paper proposes a zero-shot re-ranking method that requires no target-language labeled data, fine-tuning, or additional training. The method leverages multilingual pretrained language models to perform re-ranking of initial sparse retrieval results (e.g., from BM25) via reverse conditional probability modeling—specifically, estimating the probability of generating the target-language query given a source-language passage. This approach enables fully general, training-free, and plug-and-play cross-lingual re-ranking. Empirical evaluation demonstrates substantial improvements in paragraph retrieval accuracy for zero-shot multilingual open-domain question answering, with consistent gains observed even on low-resource languages such as Telugu. The method thus establishes a novel paradigm for CLIR in resource-constrained settings, offering strong generalization without language-specific adaptation.
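The "reverse conditional probability" scoring described above can be written out explicitly (notation ours, not from the paper): given a question $q$ in the target language and a retrieved passage $p$ (possibly in a different language), each passage is re-scored by the likelihood of generating the question from it under a pretrained multilingual language model with parameters $\theta$:

$$\mathrm{score}(p) = \log P_\theta(q \mid p) = \sum_{t=1}^{|q|} \log P_\theta(q_t \mid q_{<t}, p)$$

The passages returned by the initial sparse retriever (e.g. BM25) are then sorted in descending order of this score; no parameter of $\theta$ is updated, which is what makes the method training-free.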

📝 Abstract
Cross-lingual information retrieval (CLIR) [shi2021cross, asai2021one, jiang2020cross] can find relevant text in any language, such as English (high-resource) or Telugu (low-resource), even when the query is posed in a different, possibly low-resource, language. In this work, we aim to develop useful CLIR models for this constrained yet important setting, in which we require no additional supervision or labelled data for the retrieval task, and which therefore works effectively for low-resource languages.

We propose a simple and effective re-ranking method for improving passage retrieval in open question answering. The re-ranker re-scores retrieved passages with a zero-shot multilingual question generation model (a pre-trained language model), computing the probability of the input question in the target language conditioned on a retrieved passage, which may be in a different language. We evaluate our method in a completely zero-shot setting that requires no training. The main advantage of our approach is that it can re-rank results obtained by any sparse retrieval method, such as BM25. This eliminates the need for the expensive labelled corpora that retrieval tasks usually require, and hence the method can be used for low-resource languages.
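The re-ranking pipeline from the abstract can be sketched as follows. This is a minimal, model-agnostic illustration (all names are ours): the scoring function `log_prob_fn` stands in for log P(question | passage) under a zero-shot multilingual question generation model such as mT5; here a toy word-overlap scorer is used only so the sketch runs end to end without downloading a model.

```python
import math

def rerank(question, passages, log_prob_fn, top_k=5):
    """Re-rank retrieved passages by log P(question | passage).

    log_prob_fn(question, passage) -> float: log-probability of the
    question conditioned on the passage. In the paper's setting this
    would come from a pretrained multilingual LM (e.g. mT5), used
    zero-shot with no fine-tuning; it is injected here so the sketch
    stays model-agnostic.
    """
    scored = [(log_prob_fn(question, p), p) for p in passages]
    scored.sort(key=lambda sp: sp[0], reverse=True)
    return [p for _, p in scored[:top_k]]

def toy_log_prob(question, passage):
    """Toy stand-in scorer based on word overlap (illustration only)."""
    q_tokens = set(question.lower().split())
    p_tokens = set(passage.lower().split())
    return math.log(len(q_tokens & p_tokens) + 1)

# Passages as they might come back from an initial BM25 retrieval.
passages = [
    "The Eiffel Tower is in Paris.",
    "Telugu is a Dravidian language spoken in India.",
    "BM25 is a sparse retrieval scoring function.",
]
top = rerank("Where is Telugu spoken?", passages, toy_log_prob, top_k=1)
print(top[0])  # the Telugu passage ranks first under the toy scorer
```

Because the scorer is plugged in rather than trained, swapping the toy function for a real multilingual LM's conditional log-likelihood gives the training-free, plug-and-play behaviour the abstract describes.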
Problem

Research questions and friction points this paper is trying to address.

Cross-lingual Information Retrieval
Low-resource Languages
Unsupervised Learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-lingual Information Retrieval
Pre-trained Multilingual Model
Low-resource Languages