UniGen-1.5: Enhancing Image Generation and Editing through Reward Unification in Reinforcement Learning

📅 2025-11-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the fragmentation between image generation and editing tasks, as well as weak comprehension of fine-grained editing instructions. We propose a unified modeling framework based on reinforcement learning (RL). Our method builds upon a multimodal large language model (MLLM) architecture and integrates RL from human feedback (RLHF), cross-task reward sharing, and instruction-aware fine-tuning. Key contributions include: (1) a shared reward model that jointly optimizes generation and editing policies, enabling multitask co-training; and (2) a lightweight instruction-alignment stage that explicitly enhances the MLLM’s understanding and execution of detailed editing intents. Evaluated on GenEval and ImgEdit benchmarks, our approach achieves comprehensive scores of 0.89 and 4.31, respectively—significantly outperforming open-source models such as BAGEL and approaching the performance of closed-source systems like GPT-Image-1.

📝 Abstract
We present UniGen-1.5, a unified multimodal large language model (MLLM) for advanced image understanding, generation, and editing. Building upon UniGen, we comprehensively enhance the model architecture and training pipeline to strengthen image understanding and generation while unlocking strong image editing ability. In particular, we propose a unified Reinforcement Learning (RL) strategy that improves image generation and image editing jointly via shared reward models. To further boost image editing performance, we propose a lightweight Edit Instruction Alignment stage that significantly improves comprehension of editing instructions, which is essential for the success of RL training. Experimental results show that UniGen-1.5 delivers competitive understanding and generation performance. Specifically, it achieves overall scores of 0.89 on GenEval and 4.31 on ImgEdit, surpassing state-of-the-art open-source models such as BAGEL and reaching performance comparable to proprietary models such as GPT-Image-1.
Problem

Research questions and friction points this paper is trying to address.

Unifying reinforcement learning rewards for image generation and editing
Enhancing multimodal model architecture for improved image understanding
Improving editing instruction comprehension through alignment training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified Reinforcement Learning strategy for image tasks
Shared reward models improve generation and editing
Light Edit Instruction Alignment enhances editing comprehension
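The core idea above — one reward model scoring outputs from both the generation and the editing task so a single RL loop can co-train them — can be illustrated with a toy sketch. This is not the paper's implementation: the candidate sets, the string-match reward, and the softmax-bandit policies below are illustrative stand-ins for the MLLM policies and learned reward models, showing only how a shared reward lets two tasks share one policy-gradient update rule.

```python
import math
import random

random.seed(0)

def shared_reward(output: str, target: str) -> float:
    # Toy stand-in for a learned reward model that scores outputs
    # from EITHER task: fraction of characters matching the target.
    return sum(a == b for a, b in zip(output, target)) / len(target)

def train(tasks, steps=2000, lr=0.2):
    # One softmax "policy" per task: a preference weight per candidate.
    weights = {t: [0.0] * len(cands) for t, (cands, _) in tasks.items()}
    for _ in range(steps):
        # Multitask co-training: both tasks are updated in the same
        # loop, driven by the same shared reward function.
        for t, (cands, target) in tasks.items():
            w = weights[t]
            exp_w = [math.exp(x) for x in w]
            z = sum(exp_w)
            probs = [p / z for p in exp_w]
            i = random.choices(range(len(cands)), probs)[0]
            r = shared_reward(cands[i], target)
            # Expected reward under the current policy as a baseline.
            baseline = sum(p * shared_reward(c, target)
                           for p, c in zip(probs, cands))
            # REINFORCE-style update: raise the sampled candidate's
            # weight in proportion to its advantage.
            for j in range(len(w)):
                grad = (1.0 if j == i else 0.0) - probs[j]
                w[j] += lr * (r - baseline) * grad
    return weights

# Hypothetical candidate pools and targets for the two tasks.
tasks = {
    "generation": (["cat", "cot", "dog"], "cat"),
    "editing":    (["red", "rad", "rid"], "red"),
}
trained = train(tasks)
# After training, each task's highest-weight candidate should be its target.
best = {t: tasks[t][0][max(range(3), key=lambda j: trained[t][j])]
        for t in tasks}
```

The design point the sketch mirrors is that nothing in `train` is task-specific: because both tasks are scored by the same `shared_reward`, a single optimization loop serves both, which is the unification the paper attributes its joint generation-and-editing gains to.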