🤖 AI Summary
Existing evaluation benchmarks for financial AI are predominantly limited to monolingual, text-only settings and single-task scenarios, failing to comprehensively assess models' multilingual and multimodal capabilities. To address this gap, this work proposes the first comprehensive multilingual and multimodal evaluation framework tailored for financial large language models. The framework introduces three interrelated tasks designed to systematically evaluate model performance in financial understanding, reasoning, and decision-making. Integrating multilingual NLP, multimodal fusion, and decision modeling techniques, the study constructs standardized evaluation tasks and a high-quality dataset, both of which are publicly released to support reproducible research. This initiative establishes a unified, cross-lingual, and cross-modal benchmark for advancing globally inclusive, transparent, and robust financial intelligence systems.
📝 Abstract
We present the setup and the tasks of the FinMMEval Lab at CLEF 2026, which introduces the first multilingual and multimodal evaluation framework for financial Large Language Models (LLMs). While recent advances in financial natural language processing have enabled automated analysis of market reports, regulatory documents, and investor communications, existing benchmarks remain largely monolingual, text-only, and limited to narrow subtasks. FinMMEval 2026 addresses this gap by offering three interconnected tasks that span financial understanding, reasoning, and decision-making: Financial Exam Question Answering, Multilingual Financial Question Answering (PolyFiQA), and Financial Decision Making. Together, these tasks provide a comprehensive evaluation suite that measures models' ability to reason, generalize, and act across diverse languages and modalities. The lab aims to promote the development of robust, transparent, and globally inclusive financial AI systems, with datasets and evaluation resources publicly released to support reproducible research.
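To make the multilingual evaluation setting concrete, the sketch below shows how predictions on an exam-style multiple-choice QA task could be scored for per-language accuracy. This is a minimal illustration under assumed record fields (`language`, `gold`, `prediction`), not the official FinMMEval scoring script, whose data format and metrics may differ.

```python
# Hypothetical scoring sketch for a multiple-choice financial exam QA task.
# The record schema below is an assumption for illustration only; it is not
# the official FinMMEval submission format.
from collections import defaultdict

def accuracy_by_language(records):
    """Compute per-language accuracy over multiple-choice QA records.

    Each record is a dict with keys:
      - "language":   ISO language code of the question
      - "gold":       gold answer letter, e.g. "B"
      - "prediction": model's predicted answer letter
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for rec in records:
        lang = rec["language"]
        total[lang] += 1
        # Normalize case and whitespace before comparing answer letters.
        if rec["prediction"].strip().upper() == rec["gold"].strip().upper():
            correct[lang] += 1
    return {lang: correct[lang] / total[lang] for lang in total}

if __name__ == "__main__":
    sample = [
        {"language": "en", "gold": "B", "prediction": "B"},
        {"language": "en", "gold": "C", "prediction": "A"},
        {"language": "zh", "gold": "D", "prediction": "D"},
    ]
    print(accuracy_by_language(sample))  # {'en': 0.5, 'zh': 1.0}
```

Reporting accuracy per language rather than a single pooled score matters for a benchmark like this: an aggregate number can mask large performance gaps between high- and low-resource languages, which is precisely the kind of disparity a multilingual evaluation is meant to surface.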