Evaluating Cross-Modal Reasoning Ability and Problem Characteristics with Multimodal Item Response Theory

📅 2026-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current multimodal large language model (MLLM) evaluation benchmarks suffer from a prevalence of “shortcut questions” that can be answered using only a single modality, undermining the reliable and efficient assessment of genuine cross-modal reasoning capabilities. To address this, the work proposes the Multimodal Multidimensional Item Response Theory (M3IRT) framework, which extends classical Item Response Theory (IRT) to the multimodal setting by decoupling model ability and item difficulty into three distinct dimensions: visual, textual, and cross-modal. M3IRT enables precise modeling of cross-modal reasoning and effectively identifies and filters out shortcut questions. Experiments across three benchmarks with 24 models demonstrate that M3IRT can extract compact, high-quality evaluation subsets from datasets containing up to 50% low-quality items, significantly improving assessment efficiency and reliability while preserving rank consistency among models.

📝 Abstract
Multimodal Large Language Models (MLLMs) have recently emerged as general architectures capable of reasoning over diverse modalities. Benchmarks for MLLMs should measure their ability for cross-modal integration. However, current benchmarks are filled with shortcut questions, which can be solved using only a single modality, thereby yielding unreliable rankings. For example, in vision-language cases, the correct answer can often be found from the text alone or from the image alone. These low-quality questions unnecessarily increase the size and computational requirements of benchmarks. We introduce a multimodal and multidimensional item response theory framework (M3IRT) that extends classical IRT by decomposing both model ability and item difficulty into image-only, text-only, and cross-modal components. M3IRT estimates the cross-modal ability of MLLMs and each question's cross-modal difficulty, enabling compact, high-quality subsets that better reflect multimodal reasoning. Across 24 VLMs on three benchmarks, M3IRT prioritizes genuinely cross-modal questions over shortcuts and preserves ranking fidelity even when 50% of items are artificially generated low-quality questions, thereby reducing evaluation cost while improving reliability. M3IRT thus offers a practical tool for assessing cross-modal reasoning and refining multimodal benchmarks.
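The decomposition described above, splitting both model ability and item difficulty into image-only, text-only, and cross-modal components, can be illustrated with a minimal Rasch-style sketch. Note that the paper's exact M3IRT parameterization is not reproduced here: the additive logistic link, the `weights` parameter, and the `is_shortcut` criterion below are illustrative assumptions, not the authors' method.

```python
import math

# Dimensions follow the decomposition in the abstract:
# image-only (visual), text-only (textual), and cross-modal.
DIMS = ("visual", "textual", "cross_modal")

def p_correct(theta, b, weights=(1.0, 1.0, 1.0)):
    """Probability that a model answers an item correctly under a
    hypothetical multidimensional Rasch-style model: each dimension
    contributes (ability - difficulty), combined through a logistic link.
    theta: per-dimension model ability; b: per-dimension item difficulty."""
    z = sum(w * (t - d) for w, t, d in zip(weights, theta, b))
    return 1.0 / (1.0 + math.exp(-z))

def is_shortcut(b, margin=1.0):
    """Heuristic flag for a 'shortcut' item: if either unimodal difficulty
    is much lower than the cross-modal difficulty, the item can likely be
    solved from a single modality. (Illustrative criterion only.)"""
    visual, textual, cross = b
    return min(visual, textual) + margin < cross

# A model whose per-dimension abilities exactly match the item's
# difficulties answers at chance-like probability 0.5.
print(round(p_correct((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)), 2))  # → 0.5

# An item that is trivially easy from text alone (textual difficulty -1.5)
# but hard cross-modally is flagged as a shortcut candidate.
print(is_shortcut((2.0, -1.5, 1.0)))  # → True
```

Under such a model, benchmark refinement amounts to keeping items whose cross-modal difficulty dominates their unimodal components, which is the intuition behind filtering shortcut questions.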
Problem

Research questions and friction points this paper is trying to address.

cross-modal reasoning
multimodal benchmarks
shortcut questions
item response theory
multimodal evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal Item Response Theory
Cross-Modal Reasoning
MLLM Evaluation
Benchmark Refinement
Multimodal Benchmarking