FinSearchComp: Towards a Realistic, Expert-Level Evaluation of Financial Search and Reasoning

📅 2025-09-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing financial search and reasoning benchmarks lack open, realistic, expert-validated datasets, hindering end-to-end agent evaluation. Method: We introduce FinSearchComp, the first open-source benchmark for financial search agents, comprising 635 real-world queries across global and Greater China markets, categorized into time-sensitive data fetching, simple historical lookup, and complex historical investigation tasks. It closely emulates financial analyst workflows and features rigorous annotation by 70 domain experts with multi-stage quality control. Built on LLM-based agent architectures, it integrates web search and finance-specific plugins to support multi-step reasoning. Contribution/Results: FinSearchComp enables the first realistic, end-to-end evaluation of financial search agents. A comprehensive evaluation of 21 mainstream models shows that Grok 4 (web) achieves near-expert performance on the global subset, while DouBao (web) leads on the Greater China subset, demonstrating the critical roles of tool augmentation and regional adaptation.

📝 Abstract
Search has emerged as core infrastructure for LLM-based agents and is widely viewed as critical on the path toward more general intelligence. Finance is a particularly demanding proving ground: analysts routinely conduct complex, multi-step searches over time-sensitive, domain-specific data, making it ideal for assessing both search proficiency and knowledge-grounded reasoning. Yet no existing open financial dataset evaluates the data-searching capability of end-to-end agents, largely because constructing realistic, complicated tasks requires deep financial expertise, and time-sensitive data is hard to evaluate. We present FinSearchComp, the first fully open-source agent benchmark for realistic, open-domain financial search and reasoning. FinSearchComp comprises three tasks -- Time-Sensitive Data Fetching, Simple Historical Lookup, and Complex Historical Investigation -- that closely reproduce real-world financial analyst workflows. To ensure difficulty and reliability, we engage 70 professional financial experts for annotation and implement a rigorous multi-stage quality-assurance pipeline. The benchmark includes 635 questions spanning global and Greater China markets, and we evaluate 21 models (products) on it. Grok 4 (web) tops the global subset, approaching expert-level accuracy; DouBao (web) leads on the Greater China subset. Experimental analyses show that equipping agents with web search and financial plugins substantially improves results on FinSearchComp, and that the country origin of models and tools significantly impacts performance. By aligning with realistic analyst tasks and providing end-to-end evaluation, FinSearchComp offers a professional, high-difficulty testbed for complex financial search and reasoning.
Problem

Research questions and friction points this paper is trying to address.

Evaluating financial search and reasoning capabilities of AI agents
Assessing multi-step search over time-sensitive financial data
Benchmarking agent performance against expert-level financial analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

First fully open-source benchmark for financial search agents
Multi-stage quality assurance with experts
Integration of web search and finance-specific plugins