🤖 AI Summary
To address the image quality degradation, artifact accumulation, and limited editing flexibility caused by repeated encoding and decoding across iterative edits in latent diffusion models, this paper proposes REED, a novel training paradigm. REED builds on a VAE architecture to implement a re-encode-decode mechanism, jointly optimizing latent-space consistency constraints with diverse editing operations (e.g., text-guided and mask-guided edits). This design significantly improves reconstruction fidelity and stability under multiple sequential edits. Crucially, REED enables seamless integration of diffusion-based editing with conventional image editing techniques, breaking the constraint of a predefined set of editing operations. Experiments show that REED substantially suppresses artifacts in both text-driven and mask-guided editing tasks while improving overall image editability. By supporting multimodal, multi-step collaborative editing, REED establishes a benchmark for flexible, high-fidelity generative editing.
📝 Abstract
While latent diffusion models achieve impressive image editing results, their application to iterative editing of the same image is severely restricted. When consecutive edit operations are applied with current models, artifacts and noise accumulate due to repeated transitions between pixel and latent spaces. Some methods have attempted to address this limitation by performing the entire edit chain within the latent space, but they sacrifice flexibility by supporting only a limited, predetermined set of diffusion editing operations. We present a re-encode decode (REED) training scheme for variational autoencoders (VAEs), which preserves image quality even after many iterations. Our work enables multi-method iterative image editing: users can perform a variety of iterative edit operations, each building on the output of the previous one, using both diffusion-based operations and conventional editing techniques. We demonstrate the advantage of REED-VAE across a range of image editing scenarios, including text-based and mask-based editing frameworks. In addition, we show how REED-VAE enhances the overall editability of images, increasing the likelihood of successful and precise edit operations. We hope this work will serve as a benchmark for the newly introduced task of multi-method image editing.
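The degradation the abstract describes can be illustrated with a toy numerical sketch (this is an illustration of the failure mode, not the paper's method or an actual VAE): each pixel-to-latent round trip is modeled as a slightly lossy operator, here a simple box blur standing in for the small reconstruction error a VAE introduces, and the error relative to the original image compounds across simulated edit iterations.

```python
import numpy as np

def roundtrip(x):
    # Toy stand-in for one VAE encode->decode pass: a 3x3 box blur.
    # A real VAE is not a blur; this only models the fact that each
    # pixel<->latent transition is slightly lossy.
    padded = np.pad(x, 1, mode="edge")
    out = np.zeros_like(x)
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return out / 9.0

rng = np.random.default_rng(0)
img = rng.random((32, 32))  # hypothetical "image" with fine detail

x = img
errors = []
for _ in range(5):          # five consecutive "edit" round trips
    x = roundtrip(x)
    errors.append(float(np.linalg.norm(x - img)))
# errors grows with iteration count: quality loss accumulates,
# which is the behavior REED training is designed to suppress.
```

In this sketch the error after five round trips is substantially larger than after one, mirroring the artifact accumulation that motivates keeping reconstructions stable under repeated re-encoding.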