🤖 AI Summary
This work addresses the opacity of reasoning processes in multimodal large language models (MLLMs) by identifying a diagnostic failure mode termed "modality sabotage": a high-confidence unimodal error dominates the multimodal fusion decision, suppressing corroborative evidence from other modalities. To diagnose this phenomenon, the authors propose a lightweight, model-agnostic framework that treats each modality as an independent agent. The framework combines candidate label generation, self-assessment prompting, and aggregation-based fusion, enabling interpretable auditing of both modality-specific contributions and sabotaging behaviors. Evaluated in a case study on multimodal emotion recognition benchmarks, it surfaces systematic reliability profiles and helps distinguish failures caused by dataset artifacts from those caused by intrinsic model limitations. The approach offers a transferable, principled diagnostic tool for assessing the reasoning reliability of MLLMs.
📝 Abstract
Despite rapid growth in multimodal large language models (MLLMs), their reasoning traces remain opaque: it is often unclear which modality drives a prediction, how conflicts are resolved, or when one stream dominates. In this paper, we introduce modality sabotage, a diagnostic failure mode in which a high-confidence unimodal error overrides other evidence and corrupts the fused prediction. To analyze such dynamics, we propose a lightweight, model-agnostic evaluation layer that treats each modality as an agent, producing candidate labels and a brief self-assessment used for auditing. A simple fusion mechanism aggregates these outputs, exposing contributors (modalities supporting correct outcomes) and saboteurs (modalities that mislead). Applying our diagnostic layer in a case study on multimodal emotion recognition benchmarks with foundation models revealed systematic reliability profiles, providing insight into whether failures arise from dataset artifacts or model limitations. More broadly, our framework offers a diagnostic scaffold for multimodal reasoning, supporting principled auditing of fusion dynamics and informing possible interventions.
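The evaluation layer described above can be illustrated with a minimal sketch. This is not the paper's implementation: the confidence-weighted vote, the role names, and the example outputs are assumptions chosen to make the contributor/saboteur audit concrete. Each modality agent emits a candidate label with a self-assessed confidence; a simple fusion aggregates them, and comparison against the reference label tags each modality's role.

```python
from collections import defaultdict

def fuse(agent_outputs):
    """agent_outputs: {modality: (label, confidence)} -> fused label.

    One simple aggregation choice: a confidence-weighted vote.
    """
    scores = defaultdict(float)
    for label, conf in agent_outputs.values():
        scores[label] += conf
    return max(scores, key=scores.get)

def audit(agent_outputs, reference):
    """Tag each modality as a contributor or saboteur w.r.t. the reference label."""
    fused = fuse(agent_outputs)
    roles = {}
    for modality, (label, _conf) in agent_outputs.items():
        if label == reference:
            roles[modality] = "contributor"      # supported the correct outcome
        elif fused != reference and label == fused:
            roles[modality] = "saboteur"         # its error won the fused vote
        else:
            roles[modality] = "outvoted-error"   # wrong, but did not flip the fusion
    return fused, roles

# Hypothetical example: one high-confidence audio error overrides
# two weaker but correct votes from text and vision.
outputs = {
    "text":   ("happy", 0.40),
    "vision": ("happy", 0.40),
    "audio":  ("angry", 0.95),
}
fused, roles = audit(outputs, reference="happy")
# fused == "angry"; roles["audio"] == "saboteur"
```

The audit makes the failure mode legible: even though two of three modalities agree on the correct label, the single confident error dominates the weighted vote, which is exactly the sabotage dynamic the layer is meant to expose.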