🤖 AI Summary
Existing 2D portrait animation methods struggle to achieve visual realism and geometric consistency at the same time, largely because of the bottleneck of jointly modeling RGB appearance and depth information. To address this, we propose, within a diffusion model framework, the first appearance-depth joint conditional generation paradigm. Our method introduces a reference-network guidance mechanism and a channel-expanded U-Net backbone, enabling a single model to co-generate RGB images and their corresponding depth maps end to end. Through joint conditional modeling and multi-task fine-tuning, the approach enforces 3D geometric consistency across tasks. Experiments demonstrate that our framework unifies multiple tasks, including high-fidelity facial depth estimation, bidirectional RGB-depth translation, relighting, and audio-driven talking-head animation, while preserving visual quality and markedly improving depth fidelity and 3D structural consistency.
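As a rough illustration of what a channel-expanded backbone can look like, the sketch below shows one common way, in PyTorch with a diffusers `UNet2DConditionModel`, to widen a pretrained latent-diffusion U-Net so it consumes and predicts concatenated RGB and depth latents. This is our own minimal sketch under that assumption, not the released code; the 4-channel depth latent and zero-initialization of the new channels are assumptions chosen so the pretrained RGB behaviour is preserved at the start of fine-tuning.

```python
# Minimal sketch: widen conv_in/conv_out of a pretrained latent-diffusion U-Net
# so it accepts and predicts concatenated RGB + depth latents.
# Assumes a diffusers UNet2DConditionModel; not the paper's released code.
import torch
from diffusers import UNet2DConditionModel

def expand_unet_channels(unet: UNet2DConditionModel, extra: int = 4) -> UNet2DConditionModel:
    """Add `extra` input/output channels (e.g., a 4-channel depth latent)."""
    old_in, old_out = unet.conv_in, unet.conv_out

    # New input conv: copy the pretrained RGB-latent weights, zero-init the
    # extra depth channels so initial behaviour matches the pretrained model.
    new_in = torch.nn.Conv2d(old_in.in_channels + extra, old_in.out_channels,
                             kernel_size=old_in.kernel_size, padding=old_in.padding)
    with torch.no_grad():
        new_in.weight.zero_()
        new_in.weight[:, :old_in.in_channels] = old_in.weight
        new_in.bias.copy_(old_in.bias)
    unet.conv_in = new_in

    # New output conv: predict noise for both modalities; depth rows zero-init.
    new_out = torch.nn.Conv2d(old_out.in_channels, old_out.out_channels + extra,
                              kernel_size=old_out.kernel_size, padding=old_out.padding)
    with torch.no_grad():
        new_out.weight.zero_()
        new_out.weight[:old_out.out_channels] = old_out.weight
        new_out.bias.zero_()
        new_out.bias[:old_out.out_channels] = old_out.bias
    unet.conv_out = new_out
    return unet
```

During fine-tuning, the RGB and depth latents would then be concatenated along the channel axis before being fed to this widened network, so a single denoiser learns both modalities jointly.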
📝 Abstract
2D portrait animation has advanced significantly in recent years. Much research leverages the prior knowledge embedded in large generative diffusion models to enable high-quality image manipulation. However, most methods focus only on generating RGB images as output, and the co-generation of consistent visual and 3D output remains largely under-explored. In this work, we propose to jointly learn visual appearance and depth in a diffusion-based portrait image generator. Our method embraces the end-to-end diffusion paradigm and introduces a new architecture suited to learning this conditional joint distribution, consisting of a reference network and a channel-expanded diffusion backbone. Once trained, our framework can be efficiently adapted to various downstream applications, such as facial depth-to-image and image-to-depth generation, portrait relighting, and audio-driven talking-head animation with consistent 3D output.
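One plausible way to read the depth-to-image and image-to-depth adaptation, purely as our assumption rather than the paper's interface, is to keep the conditioning modality's latent at its (re-noised) clean value at every denoising step while sampling only the unknown modality with the same joint model. The sketch below uses a hypothetical `joint_unet` and a diffusers-style scheduler as placeholders; the `[rgb | depth]` channel layout is likewise assumed.

```python
# Hypothetical sketch: steer one joint RGB+depth diffusion model toward either
# depth-to-image (cond="depth") or image-to-depth (cond="rgb") generation.
# `joint_unet` and `scheduler` are placeholders (diffusers-style API assumed).
import torch

@torch.no_grad()
def conditional_sample(joint_unet, scheduler, cond_latent, cond="depth",
                       steps=50, latent_ch=4):
    """cond='depth' -> depth-to-image; cond='rgb' -> image-to-depth."""
    # The unknown modality starts from pure Gaussian noise in latent space.
    target = torch.randn_like(cond_latent)
    scheduler.set_timesteps(steps, device=cond_latent.device)

    for t in scheduler.timesteps:
        # Re-noise the known modality to the current timestep so both halves
        # sit at the same noise level, then denoise only the unknown half.
        noisy_cond = scheduler.add_noise(cond_latent, torch.randn_like(cond_latent), t)
        ordered = [target, noisy_cond] if cond == "depth" else [noisy_cond, target]
        x = torch.cat(ordered, dim=1)            # assumed layout: [rgb | depth]

        eps = joint_unet(x, t)                   # joint noise prediction
        eps_target = eps[:, :latent_ch] if cond == "depth" else eps[:, latent_ch:]
        target = scheduler.step(eps_target, t, target).prev_sample

    return target  # decode with the matching VAE decoder afterwards
```

Under this reading, swapping the roles of the two latent halves turns the same network from a depth-conditioned image generator into an image-conditioned depth estimator, i.e., the bidirectional RGB-depth translation mentioned in the summary above.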