Analyzing Reasoning Consistency in Large Multimodal Models under Cross-Modal Conflicts

📅 2026-01-07
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the susceptibility of large multimodal models to "textual inertia", a failure mode in which models persistently propagate textual hallucinations despite conflicting visual evidence. The study formally characterizes this issue and introduces the LogicGraph Perturbation Protocol, which systematically injects perturbations into reasoning chains to evaluate a model's self-reflective capacity under cross-modal conflict. To mitigate the problem, the authors propose Active Visual-Context Refinement, a training-free inference paradigm that couples an active visual re-grounding mechanism with an adaptive context refinement strategy, dynamically strengthening the influence of visual evidence throughout the reasoning chain. Experimental results show that this approach substantially suppresses hallucination propagation and achieves markedly higher self-correction success rates under cross-modal conflict than existing baselines, improving the consistency and robustness of multimodal reasoning.
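
The perturbation protocol is only described at a high level here. The sketch below illustrates one way such a perturbation-and-continuation check could be scripted: a step of a verified reasoning chain is replaced with a statement that conflicts with the video, the model is asked to continue, and the continuation is judged for self-correction. The ReasoningLMM interface, its continue_reasoning method, and the crude keyword judge are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass
from typing import List, Protocol


class ReasoningLMM(Protocol):
    # Hypothetical wrapper interface (not from the paper): any LMM that can
    # extend a partial chain-of-thought conditioned on the video and question.
    def continue_reasoning(self, video, question: str, partial_chain: List[str]) -> str: ...


@dataclass
class PerturbationOutcome:
    self_corrected: bool   # continuation flags or repairs the injected error
    propagated: bool       # continuation silently builds on the erroneous step


# Crude surface markers of self-correction; a real protocol would use a stricter judge.
CORRECTION_MARKERS = ("actually", "on closer inspection", "the video shows", "this is incorrect")


def evaluate_self_reflection(model: ReasoningLMM, video, question: str,
                             chain: List[str], step_idx: int,
                             conflicting_step: str) -> PerturbationOutcome:
    """Inject a textual step that conflicts with the visual evidence and check
    whether the model self-corrects when asked to continue the chain."""
    corrupted_prefix = chain[:step_idx] + [conflicting_step]
    continuation = model.continue_reasoning(video, question, corrupted_prefix)
    corrected = any(marker in continuation.lower() for marker in CORRECTION_MARKERS)
    return PerturbationOutcome(self_corrected=corrected, propagated=not corrected)
```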

📝 Abstract
Large Multimodal Models (LMMs) have demonstrated impressive capabilities in video reasoning via Chain-of-Thought (CoT). However, the robustness of their reasoning chains remains questionable. In this paper, we identify a critical failure mode termed textual inertia: once a textual hallucination occurs in the thinking process, models tend to blindly adhere to the erroneous text while neglecting conflicting visual evidence. To systematically investigate this, we propose the LogicGraph Perturbation Protocol, which structurally injects perturbations into the reasoning chains of diverse LMMs, spanning both native reasoning architectures and prompt-driven paradigms, to evaluate their self-reflection capabilities. The results reveal that models successfully self-correct in fewer than 10% of cases and predominantly succumb to blind textual error propagation. To mitigate this, we introduce Active Visual-Context Refinement, a training-free inference paradigm that couples an active visual re-grounding mechanism, which enforces fine-grained verification, with an adaptive context refinement strategy that summarizes and denoises the reasoning history. Experiments demonstrate that our approach significantly suppresses hallucination propagation and enhances reasoning robustness.
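
The abstract describes Active Visual-Context Refinement only at a high level. The following is a minimal sketch of what a training-free re-grounding-plus-refinement inference loop could look like under those descriptions: each new reasoning step is verified against the video before being accepted, and the accumulated context is periodically summarized to denoise earlier text. The GroundedLMM interface, its next_step / verify_against_video / summarize_context methods, and the refinement schedule are assumptions for illustration, not the authors' implementation.

```python
from typing import List, Protocol


class GroundedLMM(Protocol):
    # Hypothetical interface assumed for illustration; not the paper's API.
    def next_step(self, video, question: str, context: List[str]) -> str: ...
    def verify_against_video(self, video, claim: str) -> bool: ...
    def summarize_context(self, context: List[str]) -> List[str]: ...


def refine_inference(model: GroundedLMM, video, question: str,
                     max_steps: int = 12, refine_every: int = 4) -> List[str]:
    """Training-free loop in the spirit of Active Visual-Context Refinement:
    re-ground each step against the video and periodically refine the context."""
    context: List[str] = []
    for t in range(max_steps):
        step = model.next_step(video, question, context)

        # Active visual re-grounding: keep the step only if it survives
        # fine-grained verification against the visual evidence.
        if not model.verify_against_video(video, step):
            hint = "The previous claim conflicted with the video; re-examine it."
            step = "[revised] " + model.next_step(video, question, context + [hint])

        context.append(step)

        # Adaptive context refinement: compress and denoise the history so
        # earlier hallucinated text does not dominate later steps.
        if (t + 1) % refine_every == 0:
            context = model.summarize_context(context)

        if step.strip().lower().startswith("final answer"):
            break
    return context
```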
Problem

Research questions and friction points this paper is trying to address.

reasoning consistency
multimodal models
textual inertia
cross-modal conflicts
hallucination propagation
Innovation

Methods, ideas, or system contributions that make the work stand out.

textual inertia
LogicGraph Perturbation Protocol
Active Visual-Context Refinement
reasoning robustness
multimodal hallucination