🤖 AI Summary
The rise of large language models (LLMs) has spurred a new retrieval paradigm in which autonomous agents formulate queries, generate documents, and perform ranking, challenging the classical single-agent assumption in information retrieval (IR). This work introduces a multi-agent perspective, proposing the first unified framework that models the co-evolutionary dynamics among query agents, document agents, and ranker agents, thereby redefining IR's theoretical foundations and evaluation methodology. Using configurable LLM-based agents, controlled-variable experiments, and cross-agent behavioral analysis, we identify three decisive factors for end-to-end retrieval performance: agent role specialization, the coupling of generation quality across agents, and feedback delay in inter-agent interaction. Our findings extend foundational IR assumptions and establish empirically grounded design principles for next-generation retrieval systems that are both interpretable and collaboratively adaptive.
📝 Abstract
The rise of large language models (LLMs) has introduced a new era in information retrieval (IR), where queries and documents that were once assumed to be generated exclusively by humans can now also be created by automated agents that formulate queries, generate documents, and perform ranking. This shift challenges long-standing IR paradigms and calls for a reassessment of both theoretical frameworks and practical methodologies. We advocate a multi-agent perspective to better capture the complex interactions among query agents, document agents, and ranker agents. Through empirical exploration of various multi-agent retrieval settings, we show that these interactions significantly affect system performance. Our findings underscore the need to revisit classical IR paradigms and to develop new frameworks for more effective modeling and evaluation of modern retrieval systems.