🤖 AI Summary
Instruction-driven image editing models overfit to annotation patterns during supervised fine-tuning, which limits generalization; moreover, a universal, transferable reward model for evaluating editing quality is still lacking. Method: Edit-R1 is a post-training framework that employs a training-free multimodal large language model (MLLM) as a unified, zero-shot reward model, combined with Diffusion Negative-aware Finetuning (DiffusionNFT) and a low-variance group filtering mechanism for stable, efficient policy optimization. Because DiffusionNFT is likelihood-free and consistent with the flow-matching forward process, the framework supports higher-order samplers and enables plug-and-play upgrades of foundation models. Contribution/Results: UniWorld-V2, trained with Edit-R1, achieves state-of-the-art scores of 4.49 on ImgEdit and 7.83 on GEdit-Bench, and the framework delivers substantial gains in generalization and editing fidelity when applied to diverse base models such as Qwen-Image-Edit and FLUX-Kontext.
📝 Abstract
Instruction-based image editing has achieved remarkable progress; however, models solely trained via supervised fine-tuning often overfit to annotated patterns, hindering their ability to explore and generalize beyond training distributions. To this end, we introduce Edit-R1, a novel post-training framework for instruction-based image editing based on policy optimization. Specifically, we utilize Diffusion Negative-aware Finetuning (DiffusionNFT), a likelihood-free policy optimization method consistent with the flow matching forward process, thereby enabling the use of higher-order samplers and more efficient training. Another key challenge here is the absence of a universal reward model, resulting from the diverse nature of editing instructions and tasks. To bridge this gap, we employ a Multimodal Large Language Model (MLLM) as a unified, training-free reward model, leveraging its output logits to provide fine-grained feedback. Furthermore, we carefully design a low-variance group filtering mechanism to reduce MLLM scoring noise and stabilize optimization. UniWorld-V2, trained with this framework, achieves state-of-the-art results on the ImgEdit and GEdit-Bench benchmarks, scoring 4.49 and 7.83, respectively. Crucially, our framework is model-agnostic, delivering substantial performance gains when applied to diverse base models like Qwen-Image-Edit and FLUX-Kontext, demonstrating its wide applicability. Code and models are publicly available at https://github.com/PKU-YuanGroup/UniWorld-V2.
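To make the reward design concrete, here is a minimal sketch of one plausible reading of the two components the abstract describes: a reward derived from an MLLM's output logits (here assumed to be a softmax over hypothetical "yes"/"no" answer tokens to an instruction-following query) and a group filtering step that drops sample groups whose within-group reward variance is too low to carry a useful learning signal. The function names, the two-token formulation, and the variance threshold are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def logit_reward(yes_logit: float, no_logit: float) -> float:
    """Hypothetical reward: P("yes") under a softmax over the MLLM's
    logits for the two candidate answer tokens to a query such as
    'does the edited image follow the instruction?'."""
    m = max(yes_logit, no_logit)  # subtract max for numerical stability
    ey, en = np.exp(yes_logit - m), np.exp(no_logit - m)
    return float(ey / (ey + en))

def filter_groups(groups, min_std=0.05):
    """Keep only groups of rewards with an informative spread.

    `groups` is a list of reward arrays, one per editing prompt (one
    reward per sampled edit). Near-constant groups yield advantages
    dominated by MLLM scoring noise, so they are discarded; surviving
    groups are whitened into group-relative advantages.
    """
    kept = []
    for rewards in groups:
        r = np.asarray(rewards, dtype=float)
        if r.std() >= min_std:
            kept.append((r - r.mean()) / (r.std() + 1e-8))
    return kept

# Example: a near-constant group is dropped, a spread-out group is kept.
advantages = filter_groups([[0.50, 0.50, 0.50], [0.10, 0.90]])
```

Under this reading, filtering acts as a variance gate: it spends the optimization budget only on prompts where the MLLM actually distinguishes good edits from bad ones.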