🤖 AI Summary
Existing natural language image editing methods struggle to model complex object occlusions and fine-grained spatial relationships, primarily due to the absence of explicit, multimodal reasoning mechanisms. To address this, we propose MURE—a novel framework introducing interleaved text-image chain-of-thought (CoT) reasoning, which decomposes editing tasks into iterative, multimodal subtasks. MURE incorporates visual cue-guided pixel-level generation, tree-structured path search, and a deep confidence-based reward model to prune spurious reasoning paths and mitigate hallucination. The method unifies large multimodal models, positional mask generation, and visual content representation into an end-to-end joint reasoning system. Evaluated on three mainstream benchmarks, MURE significantly outperforms state-of-the-art methods. Furthermore, we release CoT-Edit-14K—the first large-scale, human-annotated text-image CoT editing dataset—comprising 14K diverse, reasoning-intensive editing instances, thereby advancing practical multimodal reasoning for image editing.
📝 Abstract
Image editing with natural language has gained significant popularity, yet existing methods struggle with intricate object intersections and fine-grained spatial relationships due to the lack of an explicit reasoning process. While Chain-of-Thought (CoT) has been explored to enhance reasoning, purely textual CoT, or CoT augmented with coordinate information, is fundamentally limited in its ability to represent intricate visual layouts and lacks the visual cues needed to guide the generation of fine-grained, pixel-level details. To address these challenges, we propose Multimodal Reasoning Edit (MURE), a novel framework that shifts the visual editing process from purely text-based reasoning to a series of interleaved textual and visual rationales. Our framework performs image editing using a natively multimodal, interleaved text-image CoT. This approach generates a step-by-step chain of reasoning in which each textual description is followed by a corresponding visual cue, such as a positional mask that defines the intended edited regions or a representation of new content. Furthermore, to mitigate the hallucination phenomenon of large language models, we introduce the Multimodal Deep Confidence (MMDC) reasoning paradigm. This paradigm explores a tree of visual reasoning paths at each step. By pruning low-quality branches using a deep confidence score from a reward model, it ensures the model consistently follows a high-quality trajectory towards the final edited result. The proposed method decomposes complex editing tasks into interdependent sub-tasks, achieving greater precision at each stage and yielding high-fidelity edited results. We define the formulation for interleaved text-image chains and release the first CoT-Edit-14K dataset, comprising 14K high-quality editing examples. Extensive experiments show that our method yields significant improvements across three image editing benchmarks.
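The MMDC search described above can be sketched as a simple tree search with reward-based pruning. The sketch below is only an illustration of the control flow, not the paper's implementation: `expand` and `deep_confidence` are hypothetical stand-ins for sampling candidate visual rationales from the multimodal model and scoring them with the reward model.

```python
def expand(state, branch):
    # Hypothetical stand-in: sample `branch` candidate visual rationales
    # (e.g. positional masks or content representations) for this step.
    return [f"{state}->cand{i}" for i in range(branch)]

def deep_confidence(state):
    # Hypothetical stand-in for the reward model's deep confidence score;
    # a deterministic stub is used here so the sketch runs on its own.
    return (hash(state) % 100) / 100.0

def mmdc_search(root, depth=3, branch=3, keep=1):
    """At each reasoning step, expand candidate branches and keep only
    the highest-confidence ones, pruning low-quality trajectories."""
    frontier = [root]
    for _ in range(depth):
        candidates = [c for s in frontier for c in expand(s, branch)]
        candidates.sort(key=deep_confidence, reverse=True)
        frontier = candidates[:keep]  # prune low-confidence branches
    return frontier[0]  # highest-confidence trajectory found
```

With `keep=1` this reduces to a greedy path through the tree; a larger `keep` corresponds to a beam over reasoning trajectories.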