Mixture of Experts Approaches in Dense Retrieval Tasks

📅 2025-10-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Weak cross-task and cross-domain generalization of dense retrieval models (DRMs) limits their applicability in zero-shot settings. To address this, we propose SB-MoE, a lightweight architecture that inserts a single Mixture-of-Experts block after the final Transformer layer to improve adaptability and zero-shot transfer without a significant increase in parameter count. Evaluated with four backbones (TinyBERT, BERT-Small, BERT-Base, and Contriever) on seven supervised retrieval benchmarks and thirteen zero-shot BEIR datasets, SB-MoE delivers its most pronounced NDCG@10 gains for lightweight backbones, where it consistently exceeds standard fine-tuning; larger backbones such as BERT-Base and Contriever require more training samples to realize improvements. The core contribution is distilling the MoE mechanism into a single-block design that provides strong generalization at minimal computational and parametric overhead, an efficient and transferable recipe for dense retrieval.
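The paper states only that a single MoE block sits after the final Transformer layer; the following is a minimal PyTorch sketch of that idea, assuming a standard top-k softmax gate over small feed-forward experts. The class name `SingleBlockMoE`, the expert architecture, and all dimensions are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SingleBlockMoE(nn.Module):
    """Illustrative single-block MoE over a dense retriever's final layer.

    A sketch, not the paper's exact design: standard top-k softmax gating
    over small position-wise feed-forward experts, plus a residual path.
    """

    def __init__(self, hidden_dim: int = 768, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # Gating network: one score per expert for each token embedding.
        self.gate = nn.Linear(hidden_dim, num_experts)
        self.experts = nn.ModuleList(
            [
                nn.Sequential(
                    nn.Linear(hidden_dim, hidden_dim),
                    nn.GELU(),
                    nn.Linear(hidden_dim, hidden_dim),
                )
                for _ in range(num_experts)
            ]
        )

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_dim) from the last Transformer layer.
        top_vals, top_idx = self.gate(hidden_states).topk(self.top_k, dim=-1)
        weights = F.softmax(top_vals, dim=-1)  # renormalize over the activated experts
        out = torch.zeros_like(hidden_states)
        # Naive dense routing for readability: every expert processes all tokens
        # and the gate's mask keeps only the top-k contributions per token.
        for e, expert in enumerate(self.experts):
            expert_out = expert(hidden_states)  # (B, L, H)
            for slot in range(self.top_k):
                mask = (top_idx[..., slot] == e).unsqueeze(-1)  # (B, L, 1) bool
                out = out + mask * weights[..., slot].unsqueeze(-1) * expert_out
        return hidden_states + out  # residual keeps the backbone representation
```

The two hyperparameters the paper varies map onto `num_experts` (employed experts) and `top_k` (activated experts); the block's output would then be pooled (e.g., mean pooling or the [CLS] token) and scored with the usual dot product between query and document embeddings.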

📝 Abstract
Dense Retrieval Models (DRMs) are a prominent development in Information Retrieval (IR). A key challenge with these neural Transformer-based models is that they often struggle to generalize beyond the specific tasks and domains they were trained on. To address this challenge, prior research in IR incorporated the Mixture-of-Experts (MoE) framework within each Transformer layer of a DRM, which, though effective, substantially increased the number of additional parameters. In this paper, we propose a more efficient design, which introduces a single MoE block (SB-MoE) after the final Transformer layer. To assess the retrieval effectiveness of SB-MoE, we perform an empirical evaluation across three IR tasks. Our experiments involve two evaluation setups, aiming to assess both in-domain effectiveness and the model's zero-shot generalizability. In the first setup, we fine-tune SB-MoE with four different underlying DRMs on seven IR benchmarks and evaluate them on their respective test sets. In the second setup, we fine-tune SB-MoE on MSMARCO and perform zero-shot evaluation on thirteen BEIR datasets. Additionally, we perform further experiments to analyze the model's dependency on its hyperparameters (i.e., the number of employed and activated experts) and investigate how this variation affects SB-MoE's performance. The obtained results show that SB-MoE is particularly effective for DRMs with lightweight base models, such as TinyBERT and BERT-Small, consistently exceeding standard model fine-tuning across benchmarks. For DRMs with more parameters, such as BERT-Base and Contriever, our model requires a larger number of training samples to achieve improved retrieval performance. Our code is available online at: https://github.com/FaySokli/SB-MoE.
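For the second setup (zero-shot evaluation), the `beir` package provides the standard evaluation loop; a rough sketch follows, assuming the model has already been fine-tuned on MSMARCO. `SBMoEEncoder` and the `model` handle are hypothetical stand-ins, SciFact is just one of the thirteen datasets, and the authors' actual evaluation code is in the linked repository.

```python
from beir import util
from beir.datasets.data_loader import GenericDataLoader
from beir.retrieval.evaluation import EvaluateRetrieval
from beir.retrieval.search.dense import DenseRetrievalExactSearch as DRES


class SBMoEEncoder:
    """Hypothetical adapter: BEIR's dense search expects these two methods."""

    def __init__(self, model):
        self.model = model  # the fine-tuned backbone + MoE block

    def encode_queries(self, queries, batch_size=32, **kwargs):
        return self.model.encode(queries, batch_size=batch_size)

    def encode_corpus(self, corpus, batch_size=32, **kwargs):
        texts = [(doc.get("title", "") + " " + doc["text"]).strip() for doc in corpus]
        return self.model.encode(texts, batch_size=batch_size)


url = "https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip"
data_path = util.download_and_unzip(url, "datasets")
corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")

model = ...  # MSMARCO-fine-tuned SB-MoE retriever (hypothetical handle)
retriever = EvaluateRetrieval(DRES(SBMoEEncoder(model), batch_size=128),
                              score_function="dot")
results = retriever.retrieve(corpus, queries)
# Per-cutoff metric dicts; NDCG@10 is the headline number reported in the paper.
ndcg, _map, recall, precision = retriever.evaluate(qrels, results, retriever.k_values)
```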
Problem

Research questions and friction points this paper is trying to address.

Enhancing dense retrieval models' generalization beyond training domains
Reducing the parameter overhead of Mixture-of-Experts designs in retrieval systems
Improving zero-shot performance across multiple information retrieval tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces a single MoE block (SB-MoE) after the final Transformer layer (end-to-end sketch below)
Applies the Mixture-of-Experts framework to dense retrieval models
Adds far fewer parameters than per-layer MoE designs while preserving retrieval effectiveness
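To make the plug-in nature concrete, the hedged end-to-end sketch below attaches the illustrative `SingleBlockMoE` from the AI-summary section to a Hugging Face backbone. The Contriever checkpoint name is real, but the composition and pooling choice are assumptions for illustration, not the authors' code.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("facebook/contriever")
backbone = AutoModel.from_pretrained("facebook/contriever")
moe = SingleBlockMoE(hidden_dim=backbone.config.hidden_size)  # sketch from above


def embed(texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = backbone(**batch).last_hidden_state  # final Transformer layer output
    hidden = moe(hidden)                          # the single MoE block on top
    mask = batch["attention_mask"].unsqueeze(-1)  # exclude padding from the pool
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # mean pooling


with torch.no_grad():
    q = embed(["what is dense retrieval?"])
    d = embed(["Dense retrieval encodes queries and documents into vectors."])
    scores = q @ d.T  # dot-product relevance score
```

Mean pooling matches Contriever's published usage; BERT-style backbones often take the [CLS] vector instead.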