Revisiting Text Ranking in Deep Research

📅 2026-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge posed by black-box web search APIs, which hinder systematic evaluation of retrieval components in deep research settings and obscure the true effectiveness of existing text ranking methods. To overcome this limitation, the authors present the first systematic reproduction and evaluation of text ranking approaches within a fixed-corpus deep research environment. Through comprehensive experiments on BrowseComp-Plus involving two open-source agents, five retrievers, and three rerankers, they investigate the impact of retrieval units, pipeline configurations, and query characteristics. Their findings reveal that agent-generated queries favor lexical, learned sparse, and multi-vector retrievers; passage-level retrieval proves more efficient; reranking substantially improves performance; and reformulating agent-generated queries into natural-language questions effectively mitigates the distributional mismatch between queries and the training data of ranking models.
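The pipeline the summary describes can be sketched as retrieve-then-rerank over passage-level units. The code below is a toy illustration only: it uses a term-overlap scorer in place of the paper's real lexical/sparse retrievers, and a stand-in re-scoring function where a cross-encoder re-ranker would go. All function names, the corpus, and the scoring formulas are hypothetical, not from the paper.

```python
def split_passages(doc: str) -> list[str]:
    """Passage-level retrieval units: split each document on blank lines."""
    return [p.strip() for p in doc.split("\n\n") if p.strip()]

def lexical_score(query: str, passage: str) -> float:
    """Toy lexical relevance: fraction of query terms present in the passage."""
    q_terms = set(query.lower().split())
    p_terms = set(passage.lower().split())
    return len(q_terms & p_terms) / max(len(q_terms), 1)

def retrieve(query: str, corpus: list[str], k: int) -> list[str]:
    """First stage: score every passage and keep the top k candidates."""
    passages = [p for doc in corpus for p in split_passages(doc)]
    return sorted(passages, key=lambda p: lexical_score(query, p), reverse=True)[:k]

def rerank(query: str, passages: list[str]) -> list[str]:
    """Second stage: re-score the candidate list.

    Placeholder for a cross-encoder re-ranker; here we just reuse the
    lexical score with a small length penalty to mark where re-scoring
    would happen in a real pipeline.
    """
    return sorted(
        passages,
        key=lambda p: lexical_score(query, p) - 0.001 * len(p.split()),
        reverse=True,
    )

corpus = [
    "Dense retrievers embed queries and documents.\n\nBM25 is a lexical ranking function.",
    "Re-ranking refines an initial candidate list.",
]
top = rerank("lexical ranking BM25", retrieve("lexical ranking BM25", corpus, k=3))
print(top[0])  # the BM25 passage surfaces first
```

The two-stage structure is the point: the first stage trades recall for speed over the whole corpus, and the (here mocked) re-ranker spends more compute on only the short candidate list.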

📝 Abstract
Deep research has emerged as an important task that aims to address hard queries through extensive open-web exploration. To tackle it, most prior work equips large language model (LLM)-based agents with opaque web search APIs, enabling agents to iteratively issue search queries, retrieve external evidence, and reason over it. Despite search's essential role in deep research, black-box web search APIs hinder systematic analysis of search components, leaving the behaviour of established text ranking methods in deep research largely unclear. To fill this gap, we reproduce a selection of key findings and best practices for IR text ranking methods in the deep research setting. In particular, we examine their effectiveness from three perspectives: (i) retrieval units (documents vs. passages), (ii) pipeline configurations (different retrievers, re-rankers, and re-ranking depths), and (iii) query characteristics (the mismatch between agent-issued queries and the training queries of text rankers). We perform experiments on BrowseComp-Plus, a deep research dataset with a fixed corpus, evaluating 2 open-source agents, 5 retrievers, and 3 re-rankers across diverse setups. We find that agent-issued queries typically follow web-search-style syntax (e.g., quoted exact matches), favouring lexical, learned sparse, and multi-vector retrievers; passage-level units are more efficient under limited context windows, and avoid the difficulties of document length normalisation in lexical retrieval; re-ranking is highly effective; translating agent-issued queries into natural-language questions significantly bridges the query mismatch.
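The abstract's last finding is that agent-issued queries use web-search syntax (e.g. quoted exact matches) that mismatches the natural-language questions most rankers are trained on, and that translating them bridges the gap. The paper presumably performs this translation with an LLM; the rule-based rewrite below is a purely hypothetical stand-in that only illustrates the kind of transformation involved.

```python
import re

def reformulate(agent_query: str) -> str:
    """Toy rewrite of a web-search-style query into a natural-language question.

    Strips search operators (quotes, site: filters) and wraps the remaining
    terms in a question template; a real system would prompt an LLM instead.
    """
    q = re.sub(r"site:\S+", "", agent_query)  # drop site: filters
    q = q.replace('"', "")                    # drop exact-match quotes
    q = re.sub(r"\s+", " ", q).strip()        # collapse leftover whitespace
    return f"What is known about {q}?"

print(reformulate('"BrowseComp-Plus" benchmark site:arxiv.org'))
# → What is known about BrowseComp-Plus benchmark?
```

The rewritten form looks like the question-style queries that rankers such as cross-encoders are typically trained on, which is the distributional match the paper's finding is about.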

Problem

Research questions and friction points this paper is trying to address.

text ranking
deep research
query mismatch
retrieval units
re-rankers

Innovation

Methods, ideas, or system contributions that make the work stand out.

text ranking
deep research
retrieval units
query mismatch
re-ranker