Towards Better Instruction Following Retrieval Models

๐Ÿ“… 2025-05-27
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Current retrieval models are trained solely on <query, passage> pairs, limiting their ability to comprehend and execute explicit user instructions. To address this, we introduce InF-IRโ€”the first large-scale, high-quality instruction-following retrieval corpus comprising over 38K triplets (<instruction, query, passage>)โ€”designed to enable instruction-aware embedding retrieval for encoder-only models. We propose a novel instruction-query joint attention mechanism and a dual-path hard negative sampling strategy that mitigates both instruction and query contamination. Furthermore, we employ o3-mini-based reasoning verification to enhance negative sample quality. Evaluated on five instruction-following retrieval benchmarks, our method achieves an 8.1% relative improvement in p-MRR over the state of the art, significantly advancing lightweight, end-to-end instruction-aligned retrieval.

๐Ÿ“ Abstract
Modern information retrieval (IR) models, trained exclusively on standard <query, passage> pairs, struggle to effectively interpret and follow explicit user instructions. We introduce InF-IR, a large-scale, high-quality training corpus tailored for enhancing retrieval models in instruction-following IR. InF-IR expands traditional training pairs into over 38,000 expressive <instruction, query, passage> triplets as positive samples. In particular, for each positive triplet, we generate two additional hard negative examples by poisoning both the instruction and the query, which are then rigorously validated by an advanced reasoning model (o3-mini) to ensure semantic plausibility while maintaining instructional incorrectness. Unlike existing corpora that primarily support computationally intensive reranking tasks for decoder-only language models, the highly contrastive positive-negative triplets in InF-IR further enable efficient representation learning for smaller encoder-only models, facilitating direct embedding-based retrieval. Using this corpus, we train InF-Embed, an instruction-aware embedding model optimized through contrastive learning and instruction-query attention mechanisms to align retrieval outcomes precisely with user intents. Extensive experiments across five instruction-based retrieval benchmarks demonstrate that InF-Embed significantly surpasses competitive baselines by 8.1% in p-MRR, a metric that measures instruction-following capability.
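The contrastive setup described above can be sketched with a minimal InfoNCE-style objective: the embedding of an (instruction, query) pair is scored against the positive passage and the hard negatives derived from a poisoned instruction and a poisoned query. This is an illustrative sketch, not the paper's implementation; all function and variable names are assumptions.

```python
import numpy as np

def info_nce_loss(anchor, passages, pos_idx=0, tau=0.05):
    """Illustrative InfoNCE loss (not the authors' code).

    anchor:   (d,) embedding of an (instruction, query) pair.
    passages: (n, d) stack of the positive passage plus hard negatives,
              e.g. one retrieved for a poisoned instruction and one for
              a poisoned query.
    """
    a = anchor / np.linalg.norm(anchor)
    P = passages / np.linalg.norm(passages, axis=1, keepdims=True)
    logits = (P @ a) / tau            # cosine similarities scaled by temperature
    logits -= logits.max()            # subtract max for numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[pos_idx])    # low when the positive scores highest
```

With dual-path negatives, each triplet contributes one negative that only contradicts the instruction and one that only contradicts the query, so minimizing this loss pushes the encoder to attend to both fields rather than ignoring the instruction.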
Problem

Research questions and friction points this paper is trying to address.

Enhancing retrieval models to follow user instructions effectively
Creating a large-scale training corpus for instruction-aware IR
Improving embedding-based retrieval with contrastive learning and attention
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale instruction-aware training corpus InF-IR
Hard negative examples via poisoned instructions
Contrastive learning with instruction-query attention
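The instruction-query attention mentioned above could plausibly take the form of a cross-attention step in which query token embeddings attend over instruction token embeddings before pooling. The sketch below is an assumption about the mechanism, not the paper's architecture; all names are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def instruction_query_attention(query_tok, instr_tok):
    """Hypothetical single-head cross-attention: each query token attends
    over the instruction tokens, the result is added back residually,
    and the fused sequence is mean-pooled into one embedding.

    query_tok: (Lq, d), instr_tok: (Li, d).
    """
    d = query_tok.shape[-1]
    scores = query_tok @ instr_tok.T / np.sqrt(d)   # (Lq, Li) attention scores
    weights = softmax(scores, axis=-1)              # distribution over instruction tokens
    fused = query_tok + weights @ instr_tok         # residual fusion of instruction context
    return fused.mean(axis=0)                       # pooled joint embedding
```

A real model would add learned projection matrices and multiple heads; the point of the sketch is that the pooled embedding depends on the instruction, so the same query retrieves differently under different instructions.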
๐Ÿ”Ž Similar Papers
No similar papers found.