Towards Generalized Multi-Image Editing for Unified Multimodal Models

📅 2026-01-09
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge that existing unified multimodal models struggle to maintain visual consistency and accurately disentangle visual cues from multiple input images in multi-image editing tasks. To overcome this limitation, the authors propose a scalable multi-image editing framework that introduces learnable latent separators to decouple conditional information from individual reference images and incorporates sinusoidal index encoding to enable identity-aware generalization across an arbitrary number of inputs. The effectiveness of the approach is validated on a high-quality, reverse-engineered multi-image editing benchmark dataset. Experimental results demonstrate that the proposed framework significantly outperforms current methods in semantic consistency, visual fidelity, and cross-image fusion, highlighting its superior capability in achieving both consistent editing and robust generalization.

📝 Abstract
Unified Multimodal Models (UMMs) integrate multimodal understanding and generation, yet they struggle to maintain visual consistency and to disambiguate visual cues when referencing details across multiple input images. In this work, we propose a scalable multi-image editing framework for UMMs that explicitly distinguishes image identities and generalizes to a variable number of inputs. Algorithmically, we introduce two innovations: 1) Learnable latent separators explicitly differentiate each reference image in the latent space, enabling accurate and disentangled conditioning. 2) Sinusoidal index encoding assigns visual tokens from the same image a continuous sinusoidal index embedding, which provides explicit image identity while allowing generalization and extrapolation to a variable number of inputs. To facilitate training and evaluation, we establish a high-fidelity benchmark using an inverse dataset-construction methodology that guarantees artifact-free, achievable outputs. Experiments show clear improvements in semantic consistency, visual fidelity, and cross-image integration over prior baselines on diverse multi-image editing tasks, validating the framework's advantages in consistency and generalization.
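The two mechanisms described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the exact sinusoidal formula, embedding dimension, and the names `sinusoidal_index_embedding` and `pack_reference_images` are assumptions, and the learnable separator is shown as a fixed vector for simplicity.

```python
import math
import numpy as np

def sinusoidal_index_embedding(image_index: int, dim: int) -> np.ndarray:
    """Continuous sinusoidal embedding that tags a token with the index of the
    reference image it came from (assumed transformer-style formula)."""
    half = dim // 2
    # Geometric frequency schedule, as in standard sinusoidal position encodings.
    freqs = np.exp(-math.log(10000.0) * np.arange(half) / half)
    angles = image_index * freqs
    return np.concatenate([np.sin(angles), np.cos(angles)])

def pack_reference_images(image_latents: list, separator: np.ndarray) -> np.ndarray:
    """Concatenate per-image latent token sequences into one conditioning
    sequence: each image's tokens receive that image's sinusoidal index
    embedding, and a separator token (learnable in the paper, fixed here)
    is inserted between consecutive images."""
    dim = separator.shape[-1]
    pieces = []
    for idx, latents in enumerate(image_latents):  # latents: (num_tokens, dim)
        pieces.append(latents + sinusoidal_index_embedding(idx, dim))
        pieces.append(separator[None, :])
    return np.concatenate(pieces[:-1], axis=0)  # drop the trailing separator
```

Because the index embedding is a continuous function of the image index rather than a learned per-slot vector, the same scheme applies unchanged when more reference images are provided at inference than were seen in training, which is what allows extrapolation to variable input counts.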
Problem

Research questions and friction points this paper is trying to address.

multi-image editing
visual consistency
image identity disambiguation
unified multimodal models
cross-image reference
Innovation

Methods, ideas, or system contributions that make the work stand out.

learnable latent separators
sinusoidal index encoding
multi-image editing
unified multimodal models
visual consistency