🤖 AI Summary
This study systematically investigates the faithfulness of chain-of-thought (CoT) reasoning in large vision-language models (LVLMs), examining whether CoT traces truly reflect internal decision processes and how modality-specific biases, particularly those arising from text versus image inputs, affect reasoning faithfulness and bias articulation. We propose a fine-grained taxonomy of bias articulation patterns, construct a controllable multimodal bias-prompting dataset with human-annotated CoT traces, and introduce a cross-model consistency diagnostic framework. Our key findings include: (1) the first identification of a “correct-then-incorrect” inconsistency pattern as a critical early-warning signal of unfaithful CoT; and (2) empirical confirmation that implicit image-based biases are rarely surfaced in CoT explanations, with LVLMs exhibiting significantly reduced reasoning faithfulness under implicit visual cues. Collectively, the results reveal a pervasive weakness in LVLMs’ ability to articulate image-derived biases explicitly, and they provide both theoretical grounding and a novel evaluation paradigm for trustworthy multimodal reasoning.
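As a concrete illustration of the “correct-then-incorrect” pattern, the minimal Python sketch below flags a CoT trace whose intermediate reasoning asserts the correct answer while the final answer deviates from it. All names (`CoTTrace`, `extract_intermediate_answer`, `is_inconsistent`) and the answer-extraction heuristic are illustrative assumptions, not the paper's actual implementation.

```python
import re
from dataclasses import dataclass

@dataclass
class CoTTrace:
    reasoning: str     # chain-of-thought text produced by the model
    final_answer: str  # the model's final choice, e.g. "B"

# Heuristic: find answer letters asserted inside the reasoning text.
ANSWER_PATTERN = re.compile(r"\banswer is\s*\(?([A-D])\)?", re.IGNORECASE)

def extract_intermediate_answer(reasoning: str) -> str | None:
    """Return the last answer letter asserted within the reasoning, if any."""
    matches = ANSWER_PATTERN.findall(reasoning)
    return matches[-1].upper() if matches else None

def is_inconsistent(trace: CoTTrace, correct_answer: str) -> bool:
    """'Correct-then-incorrect': the reasoning reaches the correct answer,
    but the final answer abruptly switches to something else."""
    intermediate = extract_intermediate_answer(trace.reasoning)
    return (
        intermediate is not None
        and intermediate == correct_answer.upper()
        and trace.final_answer.upper() != intermediate
    )

# The reasoning concludes (C), but the final answer flips to (B).
trace = CoTTrace(
    reasoning="The chart shows a steady decline, so the answer is (C).",
    final_answer="B",
)
print(is_inconsistent(trace, correct_answer="C"))  # True
```

Real traces are messier than this single regex admits, but the core signal, a disagreement between the answer asserted mid-reasoning and the final answer, is what the summary describes as an early-warning signal of unfaithful CoT.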
📝 Abstract
Chain-of-thought (CoT) reasoning enhances the performance of large language models (LLMs), but questions remain about whether these reasoning traces faithfully reflect the model's internal processes. We present the first comprehensive study of CoT faithfulness in large vision-language models (LVLMs), investigating how both text-based and previously unexplored image-based biases affect reasoning and bias articulation. Our work introduces a novel, fine-grained evaluation pipeline for categorizing bias articulation patterns, enabling significantly more precise analysis of CoT reasoning than previous methods. This framework reveals critical distinctions in how models process and respond to different types of biases, providing new insights into LVLM CoT faithfulness. Our findings show that subtle image-based biases are rarely articulated compared to explicit text-based ones, even in models specialized for reasoning. Additionally, many models exhibit a previously unidentified phenomenon we term "inconsistent" reasoning: reasoning correctly before abruptly changing the answer, which can serve as a canary for detecting biased answers produced by unfaithful CoTs. We then apply the same evaluation pipeline to revisit CoT faithfulness in LLMs across cues with varying levels of implicitness. These results indicate that current language-only reasoning models continue to struggle to articulate cues that are not overtly stated.
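To make the evaluation setup concrete, the sketch below shows one plausible way to build matched unbiased and biased prompts (an explicit text cue versus an implicit image cue) and to assign a coarse articulation label to the resulting CoT. The prompt wording, the cue-injection strategy, and every helper name here are assumptions for illustration only; they are not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Example:
    question: str
    options: dict[str, str]        # e.g. {"A": "...", "B": "..."}
    correct: str                   # gold answer letter
    image_path: str | None = None  # base image for multimodal tasks

def build_prompts(ex: Example, biased_option: str) -> dict[str, dict]:
    """Build an unbiased prompt plus two biased variants of the same item:
    an explicit text cue and an implicit image cue (represented here only by
    a pre-rendered image file; the actual image editing is out of scope)."""
    base = ex.question + "\n" + "\n".join(f"({k}) {v}" for k, v in ex.options.items())
    return {
        "unbiased": {"text": base, "image": ex.image_path},
        "text_cue": {  # bias stated explicitly in the prompt text
            "text": f"A trusted expert believes the answer is ({biased_option}).\n{base}",
            "image": ex.image_path,
        },
        "image_cue": {  # bias embedded implicitly in the image itself
            "text": base,
            "image": f"{ex.image_path}.cue_{biased_option}.png",
        },
    }

def articulation_label(cot: str, final_answer: str, biased_option: str,
                       cue_keywords: tuple[str, ...]) -> str:
    """Coarse labels for how a biased prompt affected the model."""
    if final_answer != biased_option:
        return "not_swayed"                  # answer did not follow the cue
    if any(kw.lower() in cot.lower() for kw in cue_keywords):
        return "swayed_and_articulated"      # cue is acknowledged in the CoT
    return "swayed_but_unarticulated"        # answer flipped silently (unfaithful)
```

Under this framing, the cases the abstract highlights are the "swayed_but_unarticulated" traces, which it reports are far more common for subtle image cues than for explicit text cues.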