FinAgentBench: A Benchmark Dataset for Agentic Retrieval in Financial Question Answering

📅 2025-08-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing financial retrieval methods struggle to simultaneously achieve semantic matching, document structure understanding, and domain-specific knowledge reasoning, while lacking benchmarks for evaluating multi-step reasoning capabilities. Method: We propose “agent-based retrieval”—a paradigm that decomposes retrieval into two quantifiable, sequential reasoning stages—and introduce FinAgentBench, the first large-scale benchmark for financial multi-step reasoning retrieval, comprising 3,429 expert-annotated instances covering document-type identification and key-passage localization. Our evaluation framework is designed to accommodate context-length constraints, and we integrate sparse-dense hybrid retrieval with LLM-driven multi-step reasoning, validated via targeted fine-tuning. Contribution/Results: FinAgentBench fills a critical gap in evaluating financial retrieval agents, providing an open-source dataset, standardized metrics, and empirical evidence that agent-based decomposition and domain-adaptive fine-tuning significantly improve retrieval performance.

📝 Abstract
Accurate information retrieval (IR) is critical in the financial domain, where investors must identify relevant information from large collections of documents. Traditional IR methods, whether sparse or dense, often fall short in retrieval accuracy, as the task requires not only capturing semantic similarity but also performing fine-grained reasoning over document structure and domain-specific knowledge. Recent advances in large language models (LLMs) have opened up new opportunities for retrieval with multi-step reasoning, where the model ranks passages through iterative reasoning about which information is most relevant to a given query. However, no benchmark exists to evaluate such capabilities in the financial domain. To address this gap, we introduce FinAgentBench, the first large-scale benchmark for evaluating retrieval with multi-step reasoning in finance -- a setting we term agentic retrieval. The benchmark consists of 3,429 expert-annotated examples on S&P-100 listed firms and assesses whether LLM agents can (1) identify the most relevant document type among candidates, and (2) pinpoint the key passage within the selected document. Our evaluation framework explicitly separates these two reasoning steps to address context limitations. This design provides a quantitative basis for understanding retrieval-centric LLM behavior in finance. We evaluate a suite of state-of-the-art models and further demonstrate how targeted fine-tuning can significantly improve agentic retrieval performance. Our benchmark provides a foundation for studying retrieval-centric LLM behavior in complex, domain-specific financial tasks. We will release the dataset publicly upon acceptance of the paper and plan to expand it to cover the full S&P 500 and beyond.
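The two-stage setup described in the abstract (first select a document type, then locate the key passage within the chosen document) can be sketched as a minimal pipeline. Everything below is illustrative and not part of the benchmark: the document-type descriptions, the function names, and the toy lexical-overlap scorer, which merely stands in for the LLM relevance judgment the paper actually evaluates.

```python
from collections import Counter

def overlap_score(query, text):
    # Toy stand-in for an LLM relevance judgment: count shared word
    # occurrences between the query and the candidate text.
    q = Counter(query.lower().split())
    t = Counter(text.lower().split())
    return sum(min(q[w], t[w]) for w in q)

def stage1_rank_doc_types(query, type_descriptions):
    # Stage 1: rank candidate document types (e.g., SEC filing types)
    # by their relevance to the query.
    return sorted(type_descriptions,
                  key=lambda d: overlap_score(query, type_descriptions[d]),
                  reverse=True)

def stage2_rank_passages(query, passages):
    # Stage 2: rank passages within the document selected in stage 1.
    return sorted(passages,
                  key=lambda p: overlap_score(query, p),
                  reverse=True)

# Hypothetical example query and candidates.
type_descriptions = {
    "10-K": "annual report with risk factors and audited financials",
    "8-K": "current report disclosing material events",
}
passages = [
    "Risk factors include market volatility and credit exposure.",
    "Our revenue grew ten percent year over year.",
]
query = "what are the main risk factors in the annual report"

best_type = stage1_rank_doc_types(query, type_descriptions)[0]
best_passage = stage2_rank_passages(query, passages)[0]
```

Separating the stages mirrors the paper's motivation: each call sees only a short list of candidates, so neither step has to fit an entire filing into the model's context window.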
Problem

Research questions and friction points this paper is trying to address.

Lack of benchmark for multi-step reasoning retrieval in finance
Evaluating LLM agents' document type identification and key passage extraction
Assessing retrieval accuracy with fine-grained reasoning in financial domain
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces FinAgentBench benchmark for financial multi-step reasoning
Evaluates LLM agents on document type identification and passage pinpointing
Uses targeted fine-tuning to improve agentic retrieval performance
Authors
Chanyeol Choi (LinqAlpha, United States)
Jihoon Kwon (Seoul National University / Hanwha Systems; Radar signal processing, Radar machine learning, Tracking filter, Microwave applications)
Alejandro Lopez-Lira (Assistant Professor of Finance, University of Florida; Fintech, Machine Learning, Asset Pricing, Macro Finance, Private Equity)
Chaewoon Kim (LinqAlpha, United States)
Minjae Kim (LinqAlpha, United States)
Juneha Hwang (LinqAlpha, United States)
Jaeseon Ha (LinqAlpha; AI, fundamental research)
Hojun Choi (LinqAlpha, United States)
Suyeol Yun (LinqAlpha, United States)
Yongjin Kim (LinqAlpha, United States)
Yongjae Lee (UNIST, Republic of Korea)