CoT Referring: Improving Referring Expression Tasks with Grounded Reasoning

📅 2025-10-03
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the limited cross-modal reasoning capability of multimodal large language models (MLLMs) in referring expression comprehension and segmentation. To this end, we propose CoT Referring, a novel method that constructs structured chain-of-thought (CoT) training data to decompose textual referring expressions into verifiable, sequential reasoning steps, ensuring accurate relational identification and referent consistency at each step. Additionally, we design a unified detection-and-segmentation modeling framework with an adaptive weighted loss function. To rigorously evaluate performance in complex referring scenarios, we introduce a dedicated benchmark. Extensive experiments on standard benchmarks (RefCOCO, RefCOCO+, RefCOCOg) and our new benchmark demonstrate that CoT Referring consistently outperforms strong baselines, achieving an average improvement of over 2.5% in segmentation accuracy. These results validate the effectiveness of structured cross-modal reasoning for handling intricate referring expressions.
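To make the decomposition idea concrete, here is a minimal sketch of what step-wise referring annotations could look like. The class names, fields, and the hard-coded example chain are all hypothetical illustrations, not the paper's actual annotation schema:

```python
# Illustrative sketch (not the paper's actual format): a referring
# expression decomposed into sequential, verifiable reasoning steps,
# each naming a candidate entity and the relation that constrains it.
from dataclasses import dataclass, field


@dataclass
class ReferringStep:
    step: int       # position in the reasoning chain
    entity: str     # candidate object introduced at this step
    relation: str   # relation that constrains the referent
    anchor: str     # previously resolved entity the relation grounds to


@dataclass
class CoTReferringSample:
    expression: str
    steps: list = field(default_factory=list)


def decompose(expression: str) -> CoTReferringSample:
    """Toy decomposition of 'the cup on the table left of the lamp'.

    A real pipeline would parse the expression; the chain below is
    hard-coded purely to show the target output structure.
    """
    sample = CoTReferringSample(expression=expression)
    sample.steps = [
        ReferringStep(1, "lamp", "identify", "scene"),
        ReferringStep(2, "table", "left of", "lamp"),
        ReferringStep(3, "cup", "on", "table"),
    ]
    return sample


sample = decompose("the cup on the table left of the lamp")
for s in sample.steps:
    print(f"step {s.step}: {s.entity} -- {s.relation} -> {s.anchor}")
```

Each step can be checked against the image independently, which is what makes the chain "verifiable": an error in an early step (e.g. picking the wrong table) is caught before the final referent is committed.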

๐Ÿ“ Abstract
Referring Expression Comprehension and Segmentation are critical tasks for assessing the integration of language understanding and image comprehension, serving as benchmarks for the capabilities of Multimodal Large Language Models (MLLMs). To address these challenges, we propose a new strategy, CoT Referring, which enhances model reasoning across modalities through structured chain-of-thought training data. Our approach systematically parses a textual expression into sequential referring steps; each step identifies relationships and enforces consistent reference alignment, improving accuracy in complex query scenarios. We restructure the training data to enforce a new output form, providing new annotations for existing datasets and compiling an evaluation benchmark, designed explicitly for complex referring cases, from existing resources. We also integrate detection and segmentation capabilities into a unified MLLM framework, training it with a novel adaptive weighted loss to optimize performance. Experimental results on our curated benchmark and RefCOCO/+/g demonstrate the effectiveness of our approach, with a notable improvement of over 2.5% against baseline models.
Problem

Research questions and friction points this paper is trying to address.

Enhancing multimodal reasoning for referring expression tasks
Improving accuracy in complex visual-language query scenarios
Unifying detection and segmentation in multimodal language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Chain-of-thought training structure for multimodal reasoning
Restructured training data with sequential referring annotations
Unified MLLM framework with adaptive weighted loss
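The summary does not spell out the form of the adaptive weighted loss. One common way to adaptively balance two task losses (detection and segmentation) is homoscedastic uncertainty weighting (Kendall et al., 2018); the sketch below uses that formulation as a plausible stand-in, not the paper's exact loss:

```python
import math


def adaptive_weighted_loss(l_det: float, l_seg: float,
                           log_var_det: float, log_var_seg: float) -> float:
    """Combine detection and segmentation losses with learnable weights.

    log_var_det / log_var_seg would be trainable scalars in practice;
    exp(-log_var) downweights the noisier task, and the additive
    log-variance terms keep the weights from collapsing to zero.
    This is an assumed formulation, not the one from the paper.
    """
    w_det = math.exp(-log_var_det)
    w_seg = math.exp(-log_var_seg)
    return w_det * l_det + w_seg * l_seg + log_var_det + log_var_seg


# With both log-variances at 0, the loss reduces to a plain sum:
print(adaptive_weighted_loss(1.0, 2.0, 0.0, 0.0))  # -> 3.0
```

During training, the two log-variance scalars would be optimized jointly with the model parameters, letting the balance between detection and segmentation shift as the tasks' relative difficulty changes.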