FinTradeBench: A Financial Reasoning Benchmark for LLMs

📅 2026-03-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitation of existing financial question-answering benchmarks, which predominantly focus on financial statement data and fail to evaluate models’ ability to reason about stock trading signals and their interaction with fundamental company metrics. To bridge this gap, the authors introduce the first systematically integrated benchmark that combines corporate fundamentals and trading signals, comprising 1,400 expert-curated questions across three reasoning tasks: fundamental analysis, trading signal interpretation, and cross-signal hybrid reasoning. A multi-stage framework—incorporating expert guidance, multi-model generation, self-filtering, numerical auditing, and human–AI collaborative evaluation—ensures high data quality and diversity. Experiments across 14 large language models reveal that retrieval substantially enhances performance on fundamental reasoning but offers limited gains for trading signal tasks, highlighting persistent challenges in numerical and time-series reasoning.
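To make the curation flow concrete, here is a minimal Python sketch of the multi-stage framework described above; every function, class, and variable name is an illustrative placeholder rather than the authors' released code.

```python
# Illustrative sketch (not the paper's actual pipeline) of the multi-stage
# curation flow: expert seeds -> multi-model generation -> self-filtering ->
# numerical auditing -> human-AI collaborative review.

def curate_benchmark(seed_questions, generator_models, auditor, human_reviewers):
    """Return the set of questions surviving every curation stage."""
    candidates = []
    for model in generator_models:
        drafts = model.generate_questions(seed_questions)       # multi-model generation
        kept = [q for q in drafts if model.self_consistent(q)]  # intra-model self-filtering
        candidates.extend(kept)

    audited = [q for q in candidates if auditor.numbers_check_out(q)]  # numerical auditing
    final = [q for q in audited if human_reviewers.approve(q)]         # human-AI collaboration
    return final
```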

📝 Abstract
Real-world financial decision-making is a challenging problem that requires reasoning over heterogeneous signals, including company fundamentals derived from regulatory filings and trading signals computed from price dynamics. Recently, with the advancement of Large Language Models (LLMs), financial analysts have begun to use them for financial decision-making tasks. However, existing financial question-answering benchmarks for testing these models primarily focus on company balance sheet data and rarely evaluate reasoning over how company stocks trade in the market or how trading behavior interacts with fundamentals. To close this gap, we introduce FinTradeBench, a benchmark for evaluating financial reasoning that integrates company fundamentals and trading signals. FinTradeBench contains 1,400 questions grounded in NASDAQ-100 companies over a ten-year historical window. The benchmark is organized into three reasoning categories: fundamentals-focused, trading-signal-focused, and hybrid questions requiring cross-signal reasoning. To ensure reliability at scale, we adopt a calibration-then-scaling framework that combines expert seed questions, multi-model response generation, intra-model self-filtering, numerical auditing, and human-LLM judge alignment. We evaluate 14 LLMs under zero-shot prompting and retrieval-augmented settings and observe a clear performance gap: retrieval substantially improves reasoning over textual fundamentals but provides limited benefit for trading-signal reasoning. These findings highlight fundamental challenges in numerical and time-series reasoning for current LLMs and motivate future research in financial intelligence.
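As a rough illustration of the two evaluation settings mentioned in the abstract, the sketch below contrasts zero-shot prompting with a retrieval-augmented variant. The model client, retriever, and question interfaces are assumed placeholders, not the paper's actual evaluation harness.

```python
# Minimal sketch, assuming a generic LLM client and retriever interface.

def evaluate(model, questions, retriever=None):
    """Score a model on FinTradeBench-style questions, optionally with retrieval."""
    correct = 0
    for q in questions:
        prompt = q.text
        if retriever is not None:                       # retrieval-augmented setting
            context = retriever.search(q.text, top_k=5)
            prompt = f"Context:\n{context}\n\nQuestion: {q.text}"
        answer = model.complete(prompt)                 # zero-shot otherwise
        correct += int(q.is_correct(answer))
    return correct / len(questions)

# Usage: compare the two settings for one model.
# zero_shot_acc = evaluate(llm, bench_questions)
# rag_acc = evaluate(llm, bench_questions, retriever=filings_retriever)
```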
Problem

Research questions and friction points this paper is trying to address.

financial reasoning
trading signals
company fundamentals
LLM benchmark
market interaction
Innovation

Methods, ideas, or system contributions that make the work stand out.

financial reasoning benchmark
trading signals
company fundamentals
cross-signal reasoning
calibration-then-scaling framework
Yogesh Agrawal
University of Central Florida
Aniruddha Dutta
University of Central Florida
Md Mahadi Hasan
University of Central Florida
Santu Karmaker
University of Central Florida
Aritra Dutta
Assistant Professor, University of Central Florida
Optimization · Machine Learning · Signal Processing