🤖 AI Summary
Existing financial retrieval methods struggle to simultaneously achieve semantic matching, document-structure understanding, and domain-specific reasoning, and the field lacks benchmarks for evaluating multi-step reasoning capabilities. Method: We propose "agentic retrieval", a paradigm that decomposes retrieval into two quantifiable, sequential reasoning stages, and introduce FinAgentBench, the first large-scale benchmark for multi-step reasoning retrieval in finance, comprising 3,429 expert-annotated instances covering document-type identification and key-passage localization. The evaluation framework explicitly separates these two stages to accommodate context-length constraints, and we evaluate state-of-the-art LLMs, showing that targeted fine-tuning substantially improves performance. Contribution/Results: FinAgentBench fills a critical gap in evaluating financial retrieval agents, providing a dataset to be released publicly, standardized metrics, and empirical evidence that agentic decomposition and domain-adaptive fine-tuning significantly improve retrieval performance.
📝 Abstract
Accurate information retrieval (IR) is critical in the financial domain, where investors must identify relevant information from large collections of documents. Traditional IR methods, whether sparse or dense, often fall short in retrieval accuracy, since accurate retrieval requires not only capturing semantic similarity but also performing fine-grained reasoning over document structure and domain-specific knowledge. Recent advances in large language models (LLMs) have opened up new opportunities for retrieval with multi-step reasoning, where the model ranks passages through iterative reasoning about which information is most relevant to a given query. However, no benchmark exists to evaluate such capabilities in the financial domain. To address this gap, we introduce FinAgentBench, the first large-scale benchmark for evaluating retrieval with multi-step reasoning in finance, a setting we term agentic retrieval. The benchmark consists of 3,429 expert-annotated examples on S&P 100 listed firms and assesses whether LLM agents can (1) identify the most relevant document type among candidates, and (2) pinpoint the key passage within the selected document. Our evaluation framework explicitly separates these two reasoning steps to address context limitations, providing a quantitative basis for understanding retrieval-centric LLM behavior in finance. We evaluate a suite of state-of-the-art models and further demonstrate how targeted fine-tuning can significantly improve agentic retrieval performance. Our benchmark provides a foundation for studying retrieval-centric LLM behavior in complex, domain-specific financial tasks. We will release the dataset publicly upon acceptance of the paper and plan to expand it to cover the full S&P 500 and beyond.