🤖 AI Summary
Existing image editing systems struggle to accurately interpret complex, indirect, or multi-step user instructions due to limited contextual awareness and goal-directed reasoning. This work proposes the first reinforcement learning-based multi-agent framework for image editing, formulating the editing process as a sequential decision-making problem. The approach dynamically orchestrates multiple specialized, pre-trained vision-language and diffusion models in an end-to-end manner, enabling collaborative and adaptive editing strategies. By moving beyond conventional monolithic architectures or handcrafted pipelines, the method achieves significant performance gains over leading closed-source diffusion models and existing multi-agent baselines across multiple standard benchmarks, demonstrating superior semantic understanding and generation consistency.
📄 Abstract
With the rapid advancement of commercial multi-modal models, image editing has garnered significant attention due to its widespread applicability in daily life. Despite impressive progress, existing image editing systems, particularly closed-source or proprietary models, often struggle with complex, indirect, or multi-step user instructions. These limitations hinder their ability to perform nuanced, context-aware edits that align with human intent. In this work, we propose ImageEdit-R1, a multi-agent framework for intelligent image editing that leverages reinforcement learning to coordinate high-level decision-making across a set of specialized, pretrained vision-language and generative agents. Each agent is responsible for a distinct capability, such as understanding user intent, identifying regions of interest, selecting appropriate editing actions, or synthesizing visual content, while reinforcement learning governs their collaboration to ensure coherent and goal-directed behavior. Unlike existing approaches that rely on monolithic models or hand-crafted pipelines, our method treats image editing as a sequential decision-making problem, enabling dynamic and context-aware editing strategies. Experimental results demonstrate that ImageEdit-R1 consistently outperforms both individual closed-source diffusion models and alternative multi-agent framework baselines across multiple image editing datasets.
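To make the sequential decision-making framing concrete, the sketch below shows a minimal agent-orchestration loop of the kind the abstract describes: an intent agent, a region agent, an action-selection agent, and a synthesis agent act in turn, with a scalar reward gating termination. All function names, the toy "image" (a list of applied operations), and the keyword-based reward are illustrative assumptions for exposition only, not the paper's actual agents, models, or reward design.

```python
# Hypothetical stand-ins for the specialized agents the abstract describes.
# A real system would wrap pretrained vision-language / diffusion models here.

def intent_agent(instruction):
    """Parse the user instruction into an edit goal (toy keyword match)."""
    return "recolor" if "color" in instruction else "remove"

def region_agent(image, goal):
    """Select a region of interest; here just a fixed bounding box."""
    return (10, 10, 50, 50)

def action_agent(goal, region):
    """Choose an editing action for the selected region."""
    return {"op": goal, "region": region}

def synthesis_agent(image, action):
    """Apply the edit; toy version appends the op name to the edit log."""
    return image + [action["op"]]

def reward(image, instruction):
    """Toy reward: 1.0 once the requested op appears in the edit log."""
    return 1.0 if intent_agent(instruction) in image else 0.0

def edit_episode(instruction, max_steps=3):
    """Sequential decision loop: agents act in turn until the reward is met."""
    image = []  # toy 'image' represented as a list of applied operations
    for _ in range(max_steps):
        goal = intent_agent(instruction)
        region = region_agent(image, goal)
        action = action_agent(goal, region)
        image = synthesis_agent(image, action)
        if reward(image, instruction) >= 1.0:
            break
    return image

print(edit_episode("change the color of the car"))  # ['recolor']
```

In a trained system, the per-step choices (which region, which action) would be sampled from learned policies and updated from the episode reward, rather than hard-coded as here; the loop structure, however, mirrors the sequential decision-making formulation.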