FinMTM: A Multi-Turn Multimodal Benchmark for Financial Reasoning and Agent Evaluation

📅 2026-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing evaluation benchmarks for financial vision-language models are largely confined to single-turn, single-task settings, which fail to adequately assess multimodal reasoning and interactive capabilities in real-world, complex scenarios. To address this limitation, this work introduces a large-scale bilingual (Chinese-English) benchmark that incorporates multi-turn dialogues and agent-based tasks, encompassing diverse financial charts—such as candlestick and statistical plots—and a variety of task types. The study further proposes task-specific evaluation protocols, including set-overlap scoring, turn- and session-weighted metrics, and a composite planning-outcome measure. Systematic evaluation of 22 state-of-the-art vision-language models reveals significant deficiencies in fine-grained visual perception, long-context reasoning, and execution of complex agent workflows.

📝 Abstract
The financial domain poses substantial challenges for vision-language models (VLMs) due to specialized chart formats and knowledge-intensive reasoning requirements. However, existing financial benchmarks are largely single-turn and rely on a narrow set of question formats, limiting comprehensive evaluation in realistic application scenarios. To address this gap, we propose FinMTM, a multi-turn multimodal benchmark that expands diversity along both data and task dimensions. On the data side, we curate and annotate 11,133 bilingual (Chinese and English) financial QA pairs grounded in financial visuals, including candlestick charts, statistical plots, and report figures. On the task side, FinMTM covers single- and multiple-choice questions, multi-turn open-ended dialogues, and agent-based tasks. We further design task-specific evaluation protocols, including a set-overlap scoring rule for multiple-choice questions, a weighted combination of turn-level and session-level scores for multi-turn dialogues, and a composite metric that integrates planning quality with final outcomes for agent tasks. Extensive experimental evaluation of 22 VLMs reveals their limitations in fine-grained visual perception, long-context reasoning, and complex agent workflows.
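The three protocols described above can be sketched in a few lines. This is a hypothetical illustration only: the paper does not publish its exact formulas, so the Jaccard-style overlap, the weight values, and the function names below are all assumptions.

```python
# Hedged sketch of FinMTM-style scoring rules. The exact definitions and
# weights are assumptions for illustration, not taken from the paper.

def set_overlap_score(predicted, gold):
    """Set-overlap rule for multiple-choice answers (assumed Jaccard form)."""
    pred, ref = set(predicted), set(gold)
    if not pred and not ref:
        return 1.0
    return len(pred & ref) / len(pred | ref)

def dialogue_score(turn_scores, session_score, turn_weight=0.5):
    """Weighted mix of mean turn-level score and a session-level score
    for multi-turn dialogues (weight value is an assumption)."""
    mean_turn = sum(turn_scores) / len(turn_scores)
    return turn_weight * mean_turn + (1 - turn_weight) * session_score

def agent_score(planning_quality, outcome_score, plan_weight=0.4):
    """Composite agent metric combining planning quality with the
    final task outcome (weight value is an assumption)."""
    return plan_weight * planning_quality + (1 - plan_weight) * outcome_score

# e.g. predicting {A, C} when the gold answer set is {A, B, C}
print(round(set_overlap_score({"A", "C"}, {"A", "B", "C"}), 3))  # 0.667
```

A Jaccard-style overlap rewards partially correct option sets while still penalizing both missed and spurious choices, which is why it is a natural guess for multiple-choice scoring beyond exact match.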
Problem

Research questions and friction points this paper is trying to address.

financial reasoning
multimodal benchmark
multi-turn dialogue
vision-language models
agent evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-turn multimodal benchmark
financial reasoning
vision-language models
agent evaluation
task-specific evaluation protocols