Instruction-based Image Editing with Planning, Reasoning, and Generation

📅 2026-02-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing instruction-driven image editing methods, which suffer from constrained unimodal understanding and struggle to achieve high-quality edits in complex scenes. To overcome this, the authors propose a multimodal chain-of-thought framework that decomposes the task into three stages: planning, region reasoning, and generation. Specifically, a large language model parses user instructions and formulates an editing plan, a multimodal large model identifies precise target regions based on this plan, and a prompt-guided diffusion model executes the edit while preserving fine-grained visual details. By integrating complementary multimodal reasoning, the approach transcends the bottlenecks of unimodal understanding and significantly improves both editing accuracy and visual fidelity on real-world, complex images.
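The three-stage pipeline described above can be sketched as a minimal, runnable outline. Everything here is hypothetical: the function names, the naive instruction split standing in for LLM planning, the full-image region standing in for multimodal region reasoning, and the edit log standing in for diffusion-based generation are all placeholders, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class EditPlan:
    sub_prompts: list  # ordered sub-tasks derived from the instruction

def plan(instruction: str) -> EditPlan:
    # Stage 1 (hypothetical stand-in): an LLM would decompose the user
    # instruction into sub-prompts the editing network can execute.
    # Stubbed here as a naive split on " and ".
    return EditPlan([p.strip() for p in instruction.split(" and ")])

def reason_region(image: dict, sub_prompt: str) -> dict:
    # Stage 2 (hypothetical stand-in): a multimodal LLM would localize
    # the target region for the sub-prompt. Stubbed as a full-image box.
    return {"x": 0, "y": 0, "w": image["width"], "h": image["height"]}

def apply_edit(image: dict, sub_prompt: str, region: dict) -> dict:
    # Stage 3 (hypothetical stand-in): a hint-guided diffusion model
    # would edit inside the region. Stubbed as an append to an edit log.
    out = dict(image)
    out["edits"] = list(image.get("edits", [])) + [(sub_prompt, region)]
    return out

def edit_with_cot(image: dict, instruction: str) -> dict:
    # Chain the stages: plan once, then reason + edit per sub-prompt.
    for sp in plan(instruction).sub_prompts:
        image = apply_edit(image, sp, reason_region(image, sp))
    return image

result = edit_with_cot({"width": 512, "height": 512},
                       "replace the sky and add a bird")
```

The point of the structure, per the summary, is that each stage can use the modality it is best at: text-only planning, image-and-text region grounding, then pixel-level generation constrained to the reasoned region.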

📝 Abstract
Editing images via instructions is a natural way to generate interactive content, but it remains challenging because it demands strong scene understanding as well as generation. Prior work chains large language models, object segmentation models, and editing models for this task; however, the understanding models operate on a single modality, which limits editing quality. We aim to bridge understanding and generation with a new multimodal framework that lends intelligent reasoning abilities to instruction-based image editing models for more complex cases. To this end, we decompose the instruction editing task into multimodal chain-of-thought stages: Chain-of-Thought (CoT) planning, editing region reasoning, and editing. For CoT planning, a large language model reasons out appropriate sub-prompts given the user instruction and the capabilities of the editing network. For editing region reasoning, we train an instruction-based editing region generation network with a multimodal large language model. Finally, we propose a hint-guided instruction-based editing network, built on a large text-to-image diffusion model, that accepts these hints during generation. Extensive experiments demonstrate that our method achieves competitive editing ability on complex real-world images.
Problem

Research questions and friction points this paper is trying to address.

instruction-based image editing
scene understanding
multi-modality
image generation
complex editing
Innovation

Methods, ideas, or system contributions that make the work stand out.

instruction-based image editing
Chain-of-Thought planning
multi-modal reasoning
hint-guided generation
diffusion model