All-in-one: Understanding and Generation in Multimodal Reasoning with the MAIA Benchmark

📅 2025-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of fine-grained, culturally adapted evaluation benchmarks for Vision-Language Models (VLMs) on Italian video understanding tasks. To this end, we introduce MAIA—the first multimodal reasoning benchmark for Italian video understanding. MAIA jointly evaluates VLMs on video statement verification and open-ended video question answering, assessing both visual grounding and generative capabilities. It proposes twelve disentangled reasoning categories that explicitly distinguish requirements for unimodal sufficiency, cross-modal necessity, and temporal completeness. MAIA uniquely integrates natively produced Italian video data, human-annotated fine-grained labels, and a consistency-aware joint evaluation metric. Experiments reveal pronounced modality-dependent biases and temporal reasoning deficiencies in current VLMs when applied to culture-specific scenarios. By providing a reproducible, decomposable evaluation framework, MAIA establishes a new paradigm for multilingual and multicultural VLM assessment.

📝 Abstract
We introduce MAIA (Multimodal AI Assessment), a native-Italian benchmark designed for fine-grained investigation of the reasoning abilities of visual language models on videos. MAIA differs from other available video benchmarks in its design, its reasoning categories, the metric it uses, and the language and culture of its videos. It evaluates Vision Language Models (VLMs) on two aligned tasks: a visual statement verification task and an open-ended visual question-answering task, both over the same set of video-related questions. It considers twelve reasoning categories that aim to disentangle language and vision relations by highlighting when one of the two modalities alone encodes sufficient information to solve the tasks, when both are needed, and when the full richness of the short video is essential rather than just a part of it. Thanks to this carefully thought-out design, MAIA evaluates VLMs' consistency and their visually grounded natural language comprehension and generation simultaneously through an aggregated metric. Last but not least, the video collection has been carefully selected to reflect Italian culture, and the language data are produced by native speakers.
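The abstract's "aggregated metric" suggests a joint score over the two aligned tasks, but its exact formula is not given here. Below is a minimal sketch assuming the simplest consistency-aware rule: an item counts as solved only when the model passes both the statement verification and the open-ended answer for the same question. All field names and the aggregation rule are illustrative assumptions, not the paper's definition.

```python
# Hypothetical consistency-aware aggregated score in the spirit of MAIA's
# joint evaluation. ASSUMPTION: an item is solved only when the model is
# correct on BOTH aligned tasks for the same question.
from collections import defaultdict

def aggregated_score(items):
    """items: iterable of dicts with keys
    'category'        -- one of the twelve reasoning categories
    'verification_ok' -- bool, statement verification judged correct
    'open_ended_ok'   -- bool, open-ended answer judged correct
    Returns per-category and overall joint accuracy."""
    solved, total = defaultdict(int), defaultdict(int)
    for it in items:
        total[it["category"]] += 1
        # Consistency requirement: both aligned tasks must succeed.
        if it["verification_ok"] and it["open_ended_ok"]:
            solved[it["category"]] += 1
    per_category = {c: solved[c] / total[c] for c in total}
    overall = sum(solved.values()) / max(sum(total.values()), 1)
    return per_category, overall

# Toy example: the model verifies both statements correctly but answers
# only one open-ended question correctly, so only one item counts.
demo = [
    {"category": "temporal", "verification_ok": True, "open_ended_ok": False},
    {"category": "temporal", "verification_ok": True, "open_ended_ok": True},
]
print(aggregated_score(demo))  # ({'temporal': 0.5}, 0.5)
```

A rule like this would penalize a model that guesses the true/false label correctly but cannot generate a grounded answer, which matches the abstract's goal of testing comprehension and generation simultaneously.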
Problem

Research questions and friction points this paper is trying to address.

Evaluating Vision Language Models on video tasks
Disentangling language and vision relations in reasoning
Assessing models' consistency and language comprehension simultaneously
Innovation

Methods, ideas, or system contributions that make the work stand out.

MAIA (Multimodal AI Assessment) benchmark
Visual statement verification task
Open-ended visual question-answering task, aligned with the verification task on the same questions (a hypothetical item sketch follows this list)
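To make the aligned-task design concrete, here is a hypothetical sketch of what a single benchmark item might look like. The schema, field names, and example content are illustrative assumptions, not MAIA's actual data format.

```python
# Hypothetical schema for one MAIA item; the real benchmark format may differ.
# Both tasks are aligned on the same underlying video-related question.
from dataclasses import dataclass

@dataclass
class MaiaItem:
    video_id: str             # reference to a natively Italian video clip
    category: str             # one of the twelve reasoning categories
    statement: str            # statement for the verification task
    statement_is_true: bool   # gold true/false label for verification
    question: str             # open-ended question over the same content
    reference_answer: str     # gold answer written by a native speaker

# Illustrative item (content invented for the example):
item = MaiaItem(
    video_id="clip_0001",
    category="temporal",
    statement="The chef adds the basil before serving the dish.",
    statement_is_true=True,
    question="When does the chef add the basil?",
    reference_answer="Right before serving the dish.",
)
```

Pairing the two tasks on one record is what would let an evaluator check consistency: a model that verifies the statement correctly should also be able to answer the matching open-ended question.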