The CLEF-2026 FinMMEval Lab: Multilingual and Multimodal Evaluation of Financial AI Systems

πŸ“… 2026-02-11
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing evaluation benchmarks for financial AI are predominantly limited to monolingual, text-only settings and single-task scenarios, failing to comprehensively assess models’ multilingual and multimodal capabilities. To address this gap, this work proposes the first comprehensive multilingual and multimodal evaluation framework tailored for financial large language models. The framework introduces three interrelated tasks designed to systematically evaluate model performance in financial understanding, reasoning, and decision-making. Integrating multilingual NLP, multimodal fusion, and decision modeling techniques, the study constructs standardized evaluation tasks and a high-quality dataset, both of which are publicly released to support reproducible research. This initiative establishes a unified, cross-lingual, and cross-modal benchmark for advancing globally inclusive, transparent, and robust financial intelligence systems.

πŸ“ Abstract
We present the setup and the tasks of the FinMMEval Lab at CLEF 2026, which introduces the first multilingual and multimodal evaluation framework for financial Large Language Models (LLMs). While recent advances in financial natural language processing have enabled automated analysis of market reports, regulatory documents, and investor communications, existing benchmarks remain largely monolingual, text-only, and limited to narrow subtasks. FinMMEval 2026 addresses this gap by offering three interconnected tasks that span financial understanding, reasoning, and decision-making: Financial Exam Question Answering, Multilingual Financial Question Answering (PolyFiQA), and Financial Decision Making. Together, these tasks provide a comprehensive evaluation suite that measures models' ability to reason, generalize, and act across diverse languages and modalities. The lab aims to promote the development of robust, transparent, and globally inclusive financial AI systems, with datasets and evaluation resources publicly released to support reproducible research.
Problem

Research questions and friction points this paper is trying to address.

multilingual, multimodal, financial AI, evaluation benchmark, Large Language Models
Innovation

Methods, ideas, or system contributions that make the work stand out.

multilingual, multimodal, financial LLMs, evaluation framework, cross-lingual reasoning
πŸ”Ž Similar Papers
No similar papers found.
👥 Authors

Zhuohan Xie, MBZUAI (Financial AI, Reasoning, Natural Language Processing, Computational Linguistics, Deep Learning)
Rania Elbadry, MBZUAI, Abu Dhabi, UAE
Fan Zhang, The University of Tokyo, Tokyo, Japan
Georgi Georgiev, Sofia University "St. Kliment Ohridski", Sofia, Bulgaria
Xueqing Peng, Yale University
Lingfei Qian, Yale University
Jimin Huang, The Fin AI (computational finance)
Dimitar Dimitrov, Sofia University "St. Kliment Ohridski", Sofia, Bulgaria
Vanshikaa Jani, University of Arizona, Tucson, USA
Yuyang Dai, INSAIT, Sofia University "St. Kliment Ohridski", Sofia, Bulgaria
Jiahui Geng, Mohamed bin Zayed University of Artificial Intelligence (Artificial Intelligence, Natural Language Processing)
Yuxia Wang, MBZUAI (Natural Language Processing)
Ivan Koychev, Sofia University "St. Kliment Ohridski", Sofia, Bulgaria
Veselin Stoyanov, Tome AI (Natural Language Processing, Machine Learning, Structured Prediction, Information Extraction)
Preslav Nakov, Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) (Computational Linguistics, Large Language Models, Fact-checking, Fake News)