CPAM: Context-Preserving Adaptive Manipulation for Zero-Shot Real Image Editing

📅 2025-06-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In text-driven realistic image editing, preserving non-rigid object texture/identity, maintaining background fidelity, and avoiding fine-tuning remain challenging. To address these issues, this paper proposes a zero-shot, context-preserving adaptive editing framework. Methodologically, we introduce a mask-guided local feature extraction and adaptive retention module to decouple object and background control; we further enhance self-attention and cross-attention mechanisms to jointly model structural, textural, and semantic consistency. Without any parameter fine-tuning, our approach is evaluated on the newly constructed IMBA benchmark. Quantitative and human evaluations demonstrate significant improvements over state-of-the-art methods: +12.3% in editing accuracy and +18.7% in context preservation—particularly excelling in complex deformations and fine-grained texture editing tasks.

Technology Category

Application Category

📝 Abstract
Editing natural images using textual descriptions in text-to-image diffusion models remains a significant challenge, particularly in achieving consistent generation and handling complex, non-rigid objects. Existing methods often struggle to preserve textures and identity, require extensive fine-tuning, and exhibit limitations in editing specific spatial regions or objects while retaining background details. This paper proposes Context-Preserving Adaptive Manipulation (CPAM), a novel zero-shot framework for complicated, non-rigid real image editing. Specifically, we propose a preservation adaptation module that adjusts self-attention mechanisms to preserve and independently control the object and background effectively. This ensures that the objects' shapes, textures, and identities are maintained while keeping the background undistorted during the editing process using the mask guidance technique. Additionally, we develop a localized extraction module to mitigate the interference with the non-desired modified regions during conditioning in cross-attention mechanisms. We also introduce various mask-guidance strategies to facilitate diverse image manipulation tasks in a simple manner. Extensive experiments on our newly constructed Image Manipulation BenchmArk (IMBA), a robust benchmark dataset specifically designed for real image editing, demonstrate that our proposed method is the preferred choice among human raters, outperforming existing state-of-the-art editing techniques.
Problem

Research questions and friction points this paper is trying to address.

Editing complex non-rigid objects in real images with text
Preserving textures and identity without extensive fine-tuning
Maintaining background details while editing specific regions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adjusts self-attention to preserve objects and background
Localized extraction reduces interference in cross-attention
Mask-guidance strategies enable diverse image manipulation
🔎 Similar Papers
No similar papers found.