Align then Train: Efficient Retrieval Adapter Learning

📅 2026-04-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses two frictions in dense retrieval: the semantic gap between complex queries and simple documents, and the high computational cost of fine-tuning large embedding models. The proposed Efficient Retrieval Adapter (ERA) framework introduces, for the first time, the pretrain-finetune paradigm of large language models into retrieval adapter learning through a two-stage strategy. It first aligns the embedding spaces of a strong query encoder and a lightweight document encoder via self-supervised learning, then performs query-side supervised adaptation with only a small amount of labeled data, without requiring document re-indexing. Evaluated on the MAIR benchmark, which spans six domains and 126 tasks, ERA significantly outperforms annotation-intensive methods in low-label settings, bridging the representation gap and improving cross-domain retrieval performance.
📝 Abstract
Dense retrieval systems increasingly need to handle complex queries. In many realistic settings, users express intent through long instructions or task-specific descriptions, while target documents remain relatively simple and static. This asymmetry creates a retrieval mismatch: understanding queries may require strong reasoning and instruction-following, whereas efficient document indexing favors lightweight encoders. Existing retrieval systems often address this mismatch by directly improving the embedding model, but fine-tuning large embedding models to better follow such instructions is computationally expensive, memory-intensive, and operationally burdensome. To address this challenge, we propose Efficient Retrieval Adapter (ERA), a label-efficient framework that trains retrieval adapters in two stages: self-supervised alignment and supervised adaptation. Inspired by the pre-training and supervised fine-tuning stages of LLMs, ERA first aligns the embedding spaces of a large query embedder and a lightweight document embedder, and then uses limited labeled data to adapt the query-side representation, bridging both the representation gap between embedding models and the semantic gap between complex queries and simple documents without re-indexing the corpus. Experiments on the MAIR benchmark, spanning 126 retrieval tasks across 6 domains, show that ERA improves retrieval in low-label settings, outperforms methods that rely on larger amounts of labeled data, and effectively combines stronger query embedders with weaker document embedders across domains.
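The two-stage recipe described in the abstract can be sketched numerically. The following is a minimal illustration, not the paper's implementation: random linear maps stand in for the frozen query and document embedders, stage 1 fits a linear adapter by ridge regression on unlabeled texts so adapted query embeddings land in the document space, and stage 2 refines the adapter with an in-batch-negatives contrastive loss on a handful of labeled pairs. Document embeddings (the index) are never touched. All names, dimensions, and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM_IN, DQ, DD = 64, 16, 8          # raw feature dim; query/doc embedding dims

# Stand-ins for the two FROZEN embedders (assumption: any encoder pair works).
Wq = rng.normal(size=(DQ, DIM_IN)) / np.sqrt(DIM_IN)   # "large" query embedder
Wd = rng.normal(size=(DD, DIM_IN)) / np.sqrt(DIM_IN)   # lightweight doc embedder

# --- Stage 1: self-supervised alignment (no labels, no re-indexing) ----------
# Embed the same unlabeled texts with both encoders, then fit a linear adapter
# A so that A @ query_emb(t) approximates doc_emb(t) (here: ridge regression).
texts = rng.normal(size=(500, DIM_IN))
Q, D = texts @ Wq.T, texts @ Wd.T                      # (N, DQ), (N, DD)
lam = 1e-2
A = np.linalg.solve(Q.T @ Q + lam * np.eye(DQ), Q.T @ D).T   # (DD, DQ)

# --- Stage 2: query-side supervised adaptation on a few labeled pairs --------
# In-batch-negatives contrastive (InfoNCE) loss; only the adapter A is updated,
# so a pre-built document index stays valid.
def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def accuracy(A, Qs, Ds):
    # Fraction of queries whose top-1 retrieved doc is the labeled positive.
    return float((np.argmax(Qs @ A.T @ Ds.T, axis=1) == np.arange(len(Qs))).mean())

pairs = rng.normal(size=(32, DIM_IN))                  # shared latent content
Qs, Ds = pairs @ Wq.T, pairs @ Wd.T                    # (query, positive-doc) pairs

lr, tau, B = 1e-3, 0.1, len(pairs)
acc_before = accuracy(A, Qs, Ds)
for _ in range(200):
    S = (Qs @ A.T) @ Ds.T / tau                        # similarity logits (B, B)
    G = (softmax(S) - np.eye(B)) @ Ds / (tau * B)      # dLoss/d(adapted queries)
    A -= lr * G.T @ Qs                                 # adapter-only update
acc_after = accuracy(A, Qs, Ds)

print(f"top-1 accuracy on the labeled pairs: {acc_before:.2f} -> {acc_after:.2f}")
```

In the actual system, the frozen embedders would be real encoders, the adapter would presumably be a learned neural module trained with standard tooling rather than a closed-form fit, and the stage-1 objective is self-supervised alignment rather than ridge regression. The sketch only mirrors the structure: align the two embedding spaces first, then adapt the query side with few labels, never re-indexing the corpus.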
Problem

Research questions and friction points this paper is trying to address.

dense retrieval
instruction-following
retrieval mismatch
embedding models
asymmetric queries
Innovation

Methods, ideas, or system contributions that make the work stand out.

retrieval adapter
dense retrieval
embedding alignment
label-efficient learning
asymmetric encoding