🤖 AI Summary
General object composition (GOC) faces a fundamental trade-off between geometric editability and fine-grained appearance fidelity: existing approaches rely on compact semantic embeddings, leading to severe distortions in texture, material, and other high-frequency visual details. To address this, we propose the Disentangled Geometry-editable and Appearance-preserving Diffusion (DGAD) framework, which decouples geometric transformations (such as scaling, rotation, and perspective warping) from source-object appearance modeling. Our approach introduces an implicit geometric representation and a dense cross-attention mechanism to enable pixel-level appearance alignment between reference images and the generated latent space. It integrates multi-granularity semantic encodings (CLIP/DINO), reference-based appearance extraction, and the spatial reasoning capabilities of pre-trained diffusion models. Extensive experiments across multiple benchmarks demonstrate that DGAD achieves strong performance in both geometric control accuracy and appearance fidelity.
📝 Abstract
General object composition (GOC) aims to seamlessly integrate a target object into a background scene with the desired geometric properties while preserving its fine-grained appearance details. Recent approaches derive semantic embeddings and integrate them into advanced diffusion models to enable geometry-editable generation. However, these highly compact embeddings encode only high-level semantic cues and inevitably discard fine-grained appearance details. We introduce a Disentangled Geometry-editable and Appearance-preserving Diffusion (DGAD) model that first leverages semantic embeddings to implicitly capture the desired geometric transformations and then employs a cross-attention retrieval mechanism to align fine-grained appearance features with the geometry-edited representation, facilitating both precise geometry editing and faithful appearance preservation in object composition. Specifically, DGAD builds on CLIP/DINO encoders and reference networks to extract semantic embeddings and appearance-preserving representations, which are then seamlessly integrated into the encoding and decoding pipelines in a disentangled manner. We first integrate the semantic embeddings into pre-trained diffusion models, whose strong spatial reasoning capabilities allow them to implicitly capture object geometry, thereby facilitating flexible object manipulation and ensuring effective editability. We then design a dense cross-attention mechanism that leverages the implicitly learned object geometry to retrieve appearance features and spatially align them with their corresponding regions, ensuring faithful appearance consistency. Extensive experiments on public benchmarks demonstrate the effectiveness of the proposed DGAD framework.
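The retrieval step described above can be sketched as a standard dense cross-attention: every spatial token of the geometry-edited latent acts as a query over the reference appearance features, and the softmax-weighted sum pulls appearance information into the matching region. This is a minimal NumPy illustration of the mechanism's shape and flow, not the paper's implementation; the function name, tensor layouts, and dimensions are assumptions for the example.

```python
import numpy as np

def dense_cross_attention(latent, ref_feats):
    """Hypothetical sketch of a dense cross-attention retrieval step.

    latent:    (HW, d) geometry-edited latent tokens, used as queries
    ref_feats: (N, d)  appearance features from a reference network,
                       used as both keys and values
    Returns a (HW, d) map of appearance features, spatially aligned
    with the latent: each row is a softmax-weighted mixture of
    reference features.
    """
    d = latent.shape[-1]
    # Scaled dot-product similarity between each latent position
    # and every reference feature: (HW, N)
    scores = latent @ ref_feats.T / np.sqrt(d)
    # Softmax over the reference tokens (numerically stabilized)
    scores -= scores.max(axis=-1, keepdims=True)
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    # Retrieve: weighted sum of reference features per latent position
    return attn @ ref_feats
```

In the full model the queries, keys, and values would pass through learned projections and the result would be fused back into the diffusion decoder; the sketch only shows why the implicitly learned geometry in the queries determines where each appearance feature lands.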