Can Large Multimodal Models Actively Recognize Faulty Inputs? A Systematic Evaluation Framework of Their Input Scrutiny Ability

📅 2025-08-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study is the first to systematically investigate large multimodal models’ (LMMs’) ability to actively detect erroneous input premises. Addressing the gap in prior work—which largely overlooks LMMs’ “input scrutiny” capability—we propose ISEval, a comprehensive evaluation framework encompassing seven categories of logical and cross-modal premise errors and three quantitative metrics. We benchmark ten state-of-the-art LMMs under this framework. Results reveal that most models fail to identify textual logical flaws without explicit prompting, and exhibit a novel “modality trust imbalance”—e.g., aya-vision-8b over-relies on textual inputs while neglecting visual cues. In contrast, Gemini and Claude demonstrate superior cross-modal collaborative scrutiny. Our methodology integrates controlled error injection, cross-modal conflict construction, and human-annotated validation to enable both quantitative and qualitative analysis. All code and data are publicly released.
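The summary above mentions controlled error injection and cross-modal conflict construction. As a rough illustration only (not the paper's actual data pipeline), the sketch below shows how one such evaluation item could be built: a question whose textual premise contradicts the visual content, where the names `EvalItem` and `inject_cross_modal_conflict` are hypothetical and the image is stood in for by a caption.

```python
from dataclasses import dataclass

@dataclass
class EvalItem:
    # Hypothetical item format; the caption stands in for the actual image.
    image_caption: str   # ground-truth description of the visual content
    question: str        # question whose textual premise may be flawed
    error_type: str      # e.g. "cross_modal_conflict" or "none"

def inject_cross_modal_conflict(image_caption: str,
                                true_attribute: str,
                                false_attribute: str) -> EvalItem:
    """Build a question whose textual premise contradicts the image.

    A scrutinizing model should point out that the stated attribute conflicts
    with what the image actually shows, rather than answering as if the
    premise were true.
    """
    assert true_attribute != false_attribute, "no conflict to inject"
    question = (f"The image shows a {false_attribute} object. "
                f"What is it being used for?")
    return EvalItem(image_caption=image_caption,
                    question=question,
                    error_type="cross_modal_conflict")

# Example: the image shows a red bicycle, but the question asserts it is blue.
item = inject_cross_modal_conflict(
    image_caption="a red bicycle leaning against a brick wall",
    true_attribute="red",
    false_attribute="blue",
)
print(item.question)
```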

📝 Abstract
Large Multimodal Models (LMMs) have witnessed remarkable growth, showcasing formidable capabilities in handling intricate multimodal tasks with exceptional performance. Recent research has underscored the inclination of large language models to passively accept defective inputs, often resulting in futile reasoning on invalid prompts. However, the critical question of whether LMMs can actively detect and scrutinize erroneous inputs remains unexplored. To address this gap, we introduce the Input Scrutiny Ability Evaluation Framework (ISEval), which encompasses seven categories of flawed premises and three evaluation metrics. Our extensive evaluation of ten advanced LMMs yields key findings. Most models struggle to actively detect flawed textual premises without guidance, reflecting a strong reliance on explicit prompts for premise error identification. Error type affects performance: models excel at identifying logical fallacies but struggle with surface-level linguistic errors and certain conditional flaws. Modality trust varies: Gemini 2.5 Pro and Claude Sonnet 4 balance visual and textual information, while aya-vision-8b over-relies on text when the modalities conflict. These insights underscore the urgent need to enhance LMMs' proactive verification of input validity and offer guidance for mitigating the problem. The code is available at https://github.com/MLGroupJLU/LMM_ISEval.
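The abstract does not spell out the three evaluation metrics. As a heavily hedged sketch of how one plausible metric (an "unprompted detection rate", a name assumed here, not taken from the paper) might be scored, the snippet below uses a crude keyword heuristic; the actual framework relies on human-annotated validation rather than surface patterns.

```python
import re
from typing import Callable, Iterable

# Rough surface patterns suggesting a response flags a faulty premise.
# A real evaluation would use human annotation or a judge model instead.
FLAG_PATTERNS = re.compile(
    r"(premise is (incorrect|false|flawed)"
    r"|the question assumes"
    r"|does not match the image"
    r"|contradicts the image"
    r"|there is no such)",
    re.IGNORECASE,
)

def flags_flaw(response: str) -> bool:
    """Heuristically decide whether a response points out the flawed input."""
    return bool(FLAG_PATTERNS.search(response))

def unprompted_detection_rate(model: Callable[[str], str],
                              flawed_prompts: Iterable[str]) -> float:
    """Fraction of flawed prompts on which the model flags the flaw
    without being explicitly told to check the premise."""
    prompts = list(flawed_prompts)
    if not prompts:
        return 0.0
    hits = sum(flags_flaw(model(p)) for p in prompts)
    return hits / len(prompts)

# Toy usage with a stand-in model; a real run would call an LMM with the image.
if __name__ == "__main__":
    stub = lambda p: ("The question assumes the bicycle is blue, "
                      "but the image shows a red one.")
    print(unprompted_detection_rate(stub, ["What is the blue bicycle used for?"]))
```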
Problem

Research questions and friction points this paper is trying to address.

Evaluate LMMs' ability to detect faulty multimodal inputs
Assess error type impact on LMMs' scrutiny performance
Analyze modality trust bias in LMMs' input verification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Input Scrutiny Ability Evaluation Framework (ISEval)
Seven categories of flawed premises
Three evaluation metrics for LMMs
Authors
Haiqi Yang
School of Artificial Intelligence, Jilin University
Jinzhe Li
Fudan University & Shanghai AI Lab
Gengxu Li
School of Artificial Intelligence, Jilin University
Yi Chang
School of Artificial Intelligence, Jilin University; Engineering Research Center of Knowledge-Driven Human-Machine Intelligence, MOE, China; International Center of Future Science, Jilin University
Yuan Wu
School of Artificial Intelligence, Jilin University; Engineering Research Center of Knowledge-Driven Human-Machine Intelligence, MOE, China; International Center of Future Science, Jilin University