Inverse-and-Edit: Effective and Fast Image Editing by Cycle Consistency Models

📅 2025-06-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing distilled diffusion models accelerate inference but suffer from severe distortion during inversion, leading to insufficient editing fidelity and structural consistency. To address this, we propose a cycle-consistency optimization framework grounded in consistency models, incorporating bidirectional reconstruction constraints and a fast reverse mapping mechanism to significantly enhance the editing capability of distilled models. Our method achieves high-fidelity image editing within only four sampling steps, balancing semantic integrity, content preservation, and editing flexibility. Extensive experiments across diverse editing tasks—including object replacement and attribute manipulation—on standard benchmarks demonstrate performance on par with full-step diffusion models, while accelerating inference by over an order of magnitude. The source code is publicly available.

📝 Abstract
Recent advances in image editing with diffusion models have achieved impressive results, offering fine-grained control over the generation process. However, these methods are computationally intensive because of their iterative nature. While distilled diffusion models enable faster inference, their editing capabilities remain limited, primarily because of poor inversion quality. High-fidelity inversion and reconstruction are essential for precise image editing, as they preserve the structural and semantic integrity of the source image. In this work, we propose a novel framework that enhances image inversion using consistency models, enabling high-quality editing in just four steps. Our method introduces a cycle-consistency optimization strategy that significantly improves reconstruction accuracy and enables a controllable trade-off between editability and content preservation. We achieve state-of-the-art performance across various image editing tasks and datasets, demonstrating that our method matches or surpasses full-step diffusion models while being substantially more efficient. The code of our method is available on GitHub at https://github.com/ControlGenAI/Inverse-and-Edit.
Problem

Research questions and friction points this paper is trying to address.

Poor inversion quality limits precise, structure-preserving image editing
Iterative sampling makes diffusion-based editing computationally expensive
Editability and content preservation are difficult to balance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cycle consistency models enhance image inversion
Four-step high-quality editing with consistency models
Cycle-consistency optimization improves reconstruction accuracy
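The cycle-consistency idea above can be illustrated in toy form: invert an image to a latent, map it back with a learnable reverse map, and optimize the reverse map to minimize the reconstruction error. This is a minimal numerical sketch only; the names `invert`, `generate`, and `optimize_theta` are illustrative stand-ins (in the actual method both directions are a distilled consistency model, not linear maps).

```python
import numpy as np

TRUE_SCALE = 2.0  # toy forward map: image -> latent is a fixed scaling

def invert(image):
    """Fixed 'inversion' map (image -> latent), a linear toy stand-in."""
    return image * TRUE_SCALE

def generate(latent, theta):
    """Learnable reverse map (latent -> image) with one parameter."""
    return latent * theta

def cycle_loss(images, theta):
    """Bidirectional reconstruction error: image -> latent -> image."""
    recon = generate(invert(images), theta)
    return float(np.mean((images - recon) ** 2))

def optimize_theta(images, theta=0.1, lr=0.05, steps=200):
    """Minimize the cycle loss by gradient descent
    (closed-form gradient for this linear toy)."""
    latents = invert(images)
    for _ in range(steps):
        recon = latents * theta
        grad = float(np.mean(2.0 * (recon - images) * latents))
        theta -= lr * grad
    return theta

rng = np.random.default_rng(0)
imgs = rng.standard_normal((4, 8, 8))
theta_star = optimize_theta(imgs)
# After optimization the reverse map approximately undoes the
# inversion (theta -> 1 / TRUE_SCALE), so the cycle loss is near zero.
print(cycle_loss(imgs, 0.1), "->", cycle_loss(imgs, theta_star))
```

The point of the sketch is only the training signal: the reverse map is fit so that invert-then-generate reproduces the input, which is the bidirectional reconstruction constraint in its simplest possible form.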