🤖 AI Summary
To address pervasive visual hallucination in multimodal large language models (MLLMs) during visual-spatial reasoning, this paper introduces "Grounded Chain-of-Thought" (GCoT), a new task requiring models to iteratively identify, localize, and ground visual cues in image coordinates before generating an answer. The authors formally define the GCoT learning paradigm, construct the first large-scale benchmark for it, MM-GCoT (24,022 samples), and propose a three-dimensional evaluation framework measuring answer accuracy, grounding accuracy, and answer-grounding consistency. The method combines coordinate-level visual grounding with consistency-driven joint fine-tuning. Experiments across 12 state-of-the-art MLLMs show that fine-tuning on MM-GCoT significantly suppresses visual hallucination. Moreover, the acquired GCoT capability transfers to downstream tasks, including open-ended visual question answering and referring expression comprehension, achieving up to a 37.2% improvement in answer-grounding consistency.
📝 Abstract
Despite great progress, existing multimodal large language models (MLLMs) are prone to visual hallucination, which greatly impedes their trustworthy application. In this paper, we study this problem from the perspective of visual-spatial reasoning and propose a new learning task for MLLMs, termed Grounded Chain-of-Thought (GCoT). Unlike recent visual CoT studies, which focus more on visual knowledge reasoning, GCoT aims to help MLLMs recognize and ground the relevant visual cues step by step, thereby predicting the correct answer with grounding coordinates as an intuitive basis. To facilitate this task, we carefully design and construct a dataset called Multimodal Grounded Chain-of-Thought (MM-GCoT), consisting of 24,022 GCoT examples for 5,033 images. We also introduce a comprehensive consistency evaluation system comprising three metrics: answer accuracy, grounding accuracy, and answer-grounding consistency. We further conduct a series of experiments on 12 advanced MLLMs and reveal some notable findings: i. most MLLMs perform poorly on the consistency evaluation, indicating obvious visual hallucination; ii. visual hallucination is not directly related to parameter size or general multimodal performance, i.e., a larger and stronger MLLM is not less affected by this issue. Lastly, we demonstrate that the proposed dataset helps existing MLLMs cultivate their GCoT capability and significantly reduces inconsistent answering. Moreover, this GCoT capability also generalizes to existing multimodal tasks, such as open-world QA and REC.
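The three metrics described above can be sketched as follows. The abstract does not give exact definitions, so the IoU threshold and the treatment of consistency (answer correctness agreeing with grounding correctness on each sample) are illustrative assumptions, not the paper's specification.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def evaluate(samples, iou_thresh=0.5):
    """samples: dicts with 'pred_answer', 'gt_answer', 'pred_box', 'gt_box'.
    The 0.5 IoU threshold is a common convention, assumed here."""
    ans_correct = [s["pred_answer"] == s["gt_answer"] for s in samples]
    grd_correct = [iou(s["pred_box"], s["gt_box"]) >= iou_thresh for s in samples]
    # A sample is "consistent" when the answer and its grounding are
    # both correct or both wrong (an assumed, simplified definition).
    consistent = [a == g for a, g in zip(ans_correct, grd_correct)]
    n = len(samples)
    return {
        "answer_acc": sum(ans_correct) / n,
        "grounding_acc": sum(grd_correct) / n,
        "consistency": sum(consistent) / n,
    }
```

Under this simplified definition, a model that answers correctly while grounding the wrong region (or vice versa) is penalized on consistency even when one of the two individual metrics looks good, which is exactly the hallucination pattern the evaluation is meant to expose.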