🤖 AI Summary
This work addresses the challenge of maintaining cross-reference consistency in multi-reference image editing with diffusion models, where limited interaction among reference images often degrades results. To this end, the authors propose a unified generative framework that integrates single-image editing and multi-image composition. Central to this approach is the novel Sequence-Extended Latent Fusion (SELF) representation, which serializes multiple reference images into a unified latent sequence. The framework is trained with supervised fine-tuning (SFT) under a progressive sequence-length schedule to enhance output fidelity. Additionally, Multi-Source GRPO (MSGRPO), a GRPO-based reinforcement learning strategy tailored for multi-source references, is introduced to significantly improve visual consistency and detail preservation in edited results. The code, models, and data will be publicly released.
📝 Abstract
We present UniRef-Image-Edit, a high-performance multi-modal generation system that unifies single-image editing and multi-image composition within a single framework. Existing diffusion-based editing methods often struggle to maintain consistency across multiple conditions due to limited interaction between reference inputs. To address this, we introduce Sequence-Extended Latent Fusion (SELF), a unified input representation that dynamically serializes multiple reference images into a coherent latent sequence. During a dedicated training stage, all reference images are jointly resized to fit within a fixed-length sequence under a global pixel budget. Building upon SELF, we propose a two-stage training framework comprising supervised fine-tuning (SFT) and reinforcement learning (RL). In the SFT stage, we jointly train on single-image editing and multi-image composition tasks to establish a robust generative prior. We adopt a progressive sequence-length training strategy in which all input images are initially resized to fit a total pixel budget of $1024^2$, which is then gradually increased to $1536^2$ and $2048^2$ to improve visual fidelity and cross-reference consistency. This gradual relaxation of compression enables the model to incrementally capture finer visual details while maintaining stable alignment across references. For the RL stage, we introduce Multi-Source GRPO (MSGRPO), to our knowledge the first reinforcement learning framework tailored for multi-reference image generation. MSGRPO optimizes the model to reconcile conflicting visual constraints, significantly enhancing compositional consistency. We will open-source the code, models, training data, and reward data for community research purposes.
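The global pixel-budget idea above can be sketched as follows. This is a minimal illustration, not the paper's actual resizing rule: it assumes a uniform, aspect-preserving downscale of every reference image so that their combined pixel count fits the budget (the function name `fit_to_pixel_budget` and the rounding behavior are our own choices).

```python
import math

def fit_to_pixel_budget(sizes, budget=1024**2):
    """Scale each (width, height) so the total pixel count across all
    reference images fits within a global budget, preserving aspect
    ratios. Illustrative sketch only; the exact rule used by
    UniRef-Image-Edit is not specified in the abstract."""
    total = sum(w * h for w, h in sizes)
    if total <= budget:
        # Already within budget: leave the images untouched.
        return list(sizes)
    # A uniform linear scale s shrinks the total pixel count by s^2,
    # so s = sqrt(budget / total) lands exactly on the budget.
    s = math.sqrt(budget / total)
    return [(max(1, round(w * s)), max(1, round(h * s))) for w, h in sizes]

# Progressive training would simply revisit the data with a growing
# budget, e.g. 1024**2, then 1536**2, then 2048**2.
```

Under this sketch, the progressive schedule in the abstract amounts to rerunning the same resizing with a larger `budget` at each stage, so later stages retain more detail per reference image.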