CoT-Seg: Rethinking Segmentation with Chain-of-Thought Reasoning and Self-Correction

📅 2026-01-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge that existing referring segmentation methods struggle to accurately interpret implicit semantics and produce reliable masks when handling complex queries or out-of-domain images. To this end, we propose the first training-free referring segmentation framework that uniquely integrates Chain-of-Thought (CoT) reasoning with a self-correction mechanism. Leveraging a multimodal large language model (GPT-4o), our approach decomposes complex queries into meta-instructions and iteratively performs fine-grained semantic extraction, mask generation, and self-evaluation, while also supporting retrieval-augmented reasoning. Evaluated on a newly curated, challenging benchmark—ReasonSeg-Hard—our method demonstrates significant improvements in segmentation accuracy and robustness, validating the effectiveness and potential of the proposed paradigm for vision-language-driven segmentation.

📝 Abstract
Existing work on reasoning segmentation often falls short in complex cases, particularly when addressing complicated queries and out-of-domain images. Inspired by chain-of-thought reasoning, where harder problems require longer thinking steps and time, this paper explores a system that can think step by step, look up information if needed, generate results, self-evaluate those results, and refine them, in the same way humans approach harder questions. We introduce CoT-Seg, a training-free framework that rethinks reasoning segmentation by combining chain-of-thought reasoning with self-correction. Instead of fine-tuning, CoT-Seg leverages the inherent reasoning ability of pre-trained MLLMs (GPT-4o) to decompose queries into meta-instructions, extract fine-grained semantics from images, and identify target objects even under implicit or complex prompts. Moreover, CoT-Seg incorporates a self-correction stage: the model evaluates its own segmentation against the original query and reasoning trace, identifies mismatches, and iteratively refines the mask. This tight integration of reasoning and correction significantly improves reliability and robustness, especially in ambiguous or error-prone cases. Furthermore, the CoT-Seg framework allows easy incorporation of retrieval-augmented reasoning, enabling the system to access external knowledge when the input lacks sufficient information. To showcase CoT-Seg's ability to handle very challenging cases, we introduce a new dataset, ReasonSeg-Hard. Our results highlight that combining chain-of-thought reasoning with self-correction offers a powerful paradigm for vision-language-driven segmentation.
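The abstract describes an iterate-until-satisfied pipeline: decompose the query into meta-instructions, generate a mask, self-evaluate against the query and reasoning trace, and refine on mismatch. A minimal sketch of that control loop is below. All function names and bodies are hypothetical stand-ins (the paper's actual system calls GPT-4o and a mask generator); only the chain-of-thought plus self-correction structure is illustrated, with a toy quality score in place of real evaluation.

```python
# Hedged sketch of the CoT-Seg control flow, NOT the authors' implementation.
# decompose/generate_mask/self_evaluate/refine are hypothetical stubs standing
# in for calls to an MLLM (GPT-4o) and a segmentation backend.

def decompose(query):
    # Chain-of-thought step: split a complex query into simpler meta-instructions.
    return [f"meta: {part.strip()}" for part in query.split(" and ")]

def generate_mask(image, instructions):
    # Fine-grained semantic extraction + mask generation (stubbed as a dict
    # with a toy quality score).
    return {"instructions": instructions, "quality": 0.4}

def self_evaluate(mask, query, trace):
    # Score the mask against the original query and reasoning trace (stubbed).
    return mask["quality"]

def refine(mask):
    # Self-correction: fix mismatches found during evaluation (stubbed as a
    # quality bump).
    return {**mask, "quality": min(1.0, mask["quality"] + 0.3)}

def cot_seg(image, query, threshold=0.9, max_rounds=5):
    trace = decompose(query)                     # reasoning trace
    mask = generate_mask(image, trace)
    for _ in range(max_rounds):                  # self-correction loop
        if self_evaluate(mask, query, trace) >= threshold:
            break
        mask = refine(mask)
    return mask

result = cot_seg(image=None, query="the cup left of the laptop and on the desk")
print(result["instructions"], result["quality"])
```

The loop terminates either when self-evaluation clears the threshold or after a fixed round budget; a retrieval-augmented variant would add an external lookup inside the loop when evaluation flags missing information.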
Problem

Research questions and friction points this paper is trying to address.

reasoning segmentation
complex queries
out-of-domain images
segmentation robustness
vision-language integration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Chain-of-Thought Reasoning
Self-Correction
Reasoning Segmentation
Retrieval-Augmented Reasoning
Training-Free Framework
Shiu-hong Kao
HKUST
Deep Learning, Computer Vision
Chak Ho Huang
The Hong Kong University of Science and Technology
Huaiqian Liu
The Hong Kong University of Science and Technology
Yu-Wing Tai
Dartmouth College
Computer Vision, Deep Learning, Multi-modalities Generative AI
Chi-Keung Tang
The Hong Kong University of Science and Technology