🤖 AI Summary
Existing image editing and inpainting methods emphasize semantic control but commonly neglect global style consistency in edited outputs. To address this, we propose a diffusion-based framework explicitly designed for style-consistent editing. Our approach features a dual-path cross-attention mechanism that disentangles textual semantics from visual style representations; a Progressive Self-style Representational Learning (PSRL) module enabling fine-grained style modeling; and a style contrastive loss that explicitly enforces intra-image regional style consistency. Furthermore, we introduce the first comprehensive benchmark dedicated to the quantitative evaluation of style consistency. Extensive experiments demonstrate that our method achieves significant improvements over state-of-the-art approaches in both semantic fidelity and style coherence, markedly enhancing the photorealism and visual continuity of edited results.
📝 Abstract
Maintaining stylistic consistency is crucial for the cohesion and aesthetic appeal of images, a fundamental requirement in effective image editing and inpainting. However, existing methods primarily focus on the semantic control of generated content, often neglecting the critical task of preserving this consistency. In this work, we introduce the Neural Scene Designer (NSD), a novel framework that enables photo-realistic manipulation of user-specified scene regions while ensuring both semantic alignment with user intent and stylistic consistency with the surrounding environment. NSD leverages an advanced diffusion model, incorporating two parallel cross-attention mechanisms that separately process text and style information to achieve the dual objectives of semantic control and style consistency. To capture fine-grained style representations, we propose the Progressive Self-style Representational Learning (PSRL) module. This module is predicated on the intuitive premise that different regions within a single image share a consistent style, whereas regions from different images exhibit distinct styles. The PSRL module employs a style contrastive loss that encourages high similarity between representations from the same image while enforcing dissimilarity between those from different images. Furthermore, to address the lack of standardized evaluation protocols for this task, we establish a comprehensive benchmark. This benchmark includes competing algorithms, dedicated style-related metrics, and diverse datasets and settings to facilitate fair comparisons. Extensive experiments conducted on our benchmark demonstrate the effectiveness of the proposed framework.
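The style contrastive loss described above can be illustrated with a minimal sketch: region style embeddings from the same image are treated as positives and pulled together, while embeddings from different images serve as negatives and are pushed apart. This is an illustrative InfoNCE-style formulation under our own assumptions; the function name, temperature, and exact loss form are not taken from the paper.

```python
import numpy as np

def style_contrastive_loss(embeddings, image_ids, temperature=0.1):
    """Illustrative sketch (not the paper's code) of a style contrastive
    loss: regions of the same image should have similar style embeddings,
    regions of different images dissimilar ones."""
    # L2-normalize region style embeddings so similarities are cosine-based
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature  # pairwise scaled similarities
    n = len(image_ids)
    losses = []
    for i in range(n):
        # Positives: other regions from the same image; negatives: other images
        pos = [j for j in range(n) if j != i and image_ids[j] == image_ids[i]]
        neg = [j for j in range(n) if image_ids[j] != image_ids[i]]
        if not pos or not neg:
            continue
        for p in pos:
            # InfoNCE-style term: positive pair against the anchor's negatives
            logits = np.concatenate(([sim[i, p]], sim[i, neg]))
            losses.append(-logits[0] + np.log(np.sum(np.exp(logits))))
    return float(np.mean(losses))
```

Under this sketch, a batch whose same-image regions already share a style (similar embeddings) yields a low loss, while mismatched groupings yield a high one, which is the training signal the PSRL module is said to exploit.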