Generative Visual Chain-of-Thought for Image Editing

📅 2026-03-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of accurately localizing editing regions in complex scenes under fine-grained spatial instructions, a limitation of existing image editing methods. To this end, the authors propose the Generative Visual Chain-of-Thought (GVCoT) framework, which performs end-to-end native visual reasoning by first generating spatial cues and then executing the edit. Key contributions include the construction of GVCoT-Edit-Instruct, a large-scale instruction dataset comprising 19 task categories and 1.8 million samples; the introduction of SREdit-Bench, a new benchmark for spatially precise image editing; and a progressive training strategy that combines supervised fine-tuning with reinforcement learning. Experimental results demonstrate that GVCoT significantly outperforms current approaches on both the SREdit-Bench and ImgEdit benchmarks, achieving more accurate and interpretable image editing.

📝 Abstract
Existing image editing methods struggle to perceive where to edit, especially under complex scenes and nuanced spatial instructions. To address this issue, we propose Generative Visual Chain-of-Thought (GVCoT), a unified framework that performs native visual reasoning by first generating spatial cues to localize the target region and then executing the edit. Unlike prior text-only CoT or tool-dependent visual CoT paradigms, GVCoT jointly optimizes the visual tokens generated during the reasoning and editing phases in an end-to-end manner. This design fosters the emergence of an innate spatial reasoning ability and enables more effective use of visual-domain cues. The main challenge in training GVCoT lies in the scarcity of large-scale editing data with precise edit-region annotations; to this end, we construct GVCoT-Edit-Instruct, a dataset of 1.8M high-quality samples spanning 19 tasks. We adopt a progressive training strategy: supervised fine-tuning to build foundational localization ability in the reasoning trace before the final edit, followed by reinforcement learning to further improve reasoning and editing quality. Finally, we introduce SREdit-Bench, a new benchmark designed to comprehensively stress-test models under sophisticated scenes and fine-grained referring expressions. Experiments demonstrate that GVCoT consistently outperforms state-of-the-art models on SREdit-Bench and ImgEdit. We hope GVCoT will inspire future research toward interpretable and precise image editing.
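The abstract's core idea, reasoning about *where* before performing the edit, can be sketched as a two-stage inference loop. The sketch below is purely illustrative: `localize`, `edit`, `gvcot_edit`, and the `SpatialCue` class are hypothetical stand-ins (the paper's actual model jointly generates visual tokens for both phases), and the "model" here is a mock that edits a fixed box.

```python
from dataclasses import dataclass

@dataclass
class SpatialCue:
    # Bounding box (x0, y0, x1, y1) localizing the region to edit.
    box: tuple

def localize(image, instruction):
    """Stage 1 (mock): generate a spatial cue from the instruction.

    A real model would emit visual tokens during reasoning; here we
    simply return the top-left quadrant as the target region.
    """
    h, w = len(image), len(image[0])
    return SpatialCue(box=(0, 0, w // 2, h // 2))

def edit(image, instruction, cue):
    """Stage 2 (mock): apply the edit only inside the localized region."""
    x0, y0, x1, y1 = cue.box
    out = [row[:] for row in image]  # copy so the input stays untouched
    for y in range(y0, y1):
        for x in range(x0, x1):
            out[y][x] = "edited"
    return out

def gvcot_edit(image, instruction):
    """End-to-end: reason about *where* first, then perform the edit."""
    cue = localize(image, instruction)
    return edit(image, instruction, cue)

image = [["px"] * 4 for _ in range(4)]
result = gvcot_edit(image, "recolor the top-left lamp")
print(result[0][0], result[3][3])  # region inside the cue is edited; outside is untouched
```

The point of the structure is that the edit is conditioned on an explicit localization step rather than applied globally, which is what the paper argues makes edits both more precise and more interpretable.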
Problem

Research questions and friction points this paper is trying to address.

image editing
spatial reasoning
visual chain-of-thought
region localization
complex scenes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generative Visual Chain-of-Thought
visual reasoning
spatial localization
end-to-end visual token optimization
image editing benchmark