🤖 AI Summary
Existing reference-based image generation methods are limited to a single 2D reference image and struggle to ensure consistency with 3D structure. This work proposes the first diffusion-based approach that leverages 3D assets as reference inputs, introducing a dual-branch cross-domain diffusion architecture that jointly models multi-view RGB images and point maps. By incorporating a spatial alignment mechanism and a domain disentanglement strategy, the method achieves precise alignment between generated images and the 3D reference in both color and canonical coordinate space. The approach significantly enhances geometric consistency between 2D outputs and the underlying 3D assets, demonstrating the effectiveness and potential of integrating diffusion models with 3D content creation.
📝 Abstract
In this paper, we propose a 3D asset-referenced diffusion model for image generation, exploring how to integrate 3D assets into image diffusion models. Existing reference-based image generation methods build on large-scale pretrained diffusion models and demonstrate strong capability in generating diverse images conditioned on a single reference image. However, these methods are limited to single-image references and cannot exploit 3D assets, constraining their practical versatility. To address this gap, we present a cross-domain diffusion model with dual-branch perception that takes multi-view RGB images and point maps of 3D assets as input, jointly modeling their colors and canonical-space coordinates to achieve precise consistency between generated images and the 3D references. A spatially aligned dual-branch architecture and a domain-decoupled generation mechanism ensure the simultaneous generation of two spatially aligned yet content-disentangled outputs, RGB images and point maps, linking 2D image attributes with 3D asset attributes. Experiments show that our approach effectively uses 3D assets as references to produce images consistent with the given assets, opening new possibilities for combining diffusion models with 3D content creation.
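To make the dual-branch idea concrete, here is a minimal NumPy sketch of one denoising step, under assumptions of ours (the paper's actual model is a large diffusion network; all weights, shapes, and function names here are hypothetical). A shared trunk operates on per-pixel concatenated RGB and point-map values, which illustrates spatial alignment, while two separate output heads illustrate domain disentanglement:

```python
import numpy as np

rng = np.random.default_rng(0)

H = W = 8   # toy spatial resolution
C = 3       # channels per domain: RGB colors, and XYZ canonical coordinates

# Hypothetical toy weights (stand-ins for the model's learned parameters).
w_shared = rng.normal(0.0, 0.1, (2 * C, 16))  # shared trunk mixes both domains
b_shared = np.zeros(16)
w_rgb = rng.normal(0.0, 0.1, (16, C))         # RGB-specific head
b_rgb = np.zeros(C)
w_pts = rng.normal(0.0, 0.1, (16, C))         # point-map-specific head
b_pts = np.zeros(C)

def dual_branch_denoise(noisy_rgb, noisy_pts):
    """One toy denoising step: a shared, spatially aligned trunk followed by
    two domain-specific heads that keep the output contents disentangled."""
    # Spatial alignment: concatenate the two domains per pixel, so every
    # shared feature sees co-located RGB and canonical-coordinate values.
    x = np.concatenate([noisy_rgb, noisy_pts], axis=-1)       # (H, W, 2C)
    h = np.maximum(x @ w_shared + b_shared, 0.0)              # (H, W, 16)
    # Domain disentanglement: separate heads predict each output.
    return h @ w_rgb + b_rgb, h @ w_pts + b_pts               # two (H, W, C)

rgb_out, pts_out = dual_branch_denoise(
    rng.normal(size=(H, W, C)), rng.normal(size=(H, W, C)))
print(rgb_out.shape, pts_out.shape)  # (8, 8, 3) (8, 8, 3)
```

Because both heads read the same spatial feature map, the predicted RGB image and point map stay pixel-aligned by construction, which is the property the paper relies on to link 2D image attributes with 3D asset attributes.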