🤖 AI Summary
Existing portrait animation methods struggle to achieve high-fidelity disentanglement between head pose and facial expression, severely limiting independent controllability. To address this, we propose the first diffusion-based framework to achieve full disentanglement of explicit pose and implicit expression. Our method introduces a dual-path motion representation: a global geometric transformation for explicit pose modeling and a latent expression code for implicit expression learning. We further design a dual-branch conditioning scheme and a progressive hybrid classifier-free guidance mechanism to enhance conditional control fidelity, while cross-attention integration enables precise, decoupled modulation of pose and expression. Given a single source image and a driving video, our approach generates high-quality animations with superior identity preservation, motion accuracy, and disentangled controllability, outperforming state-of-the-art methods on key benchmarks.
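To make the dual-path representation concrete, here is a minimal sketch of how an explicit pose transform and an implicit expression latent could be injected into one denoiser block: the pose enters as an additive embedding, the expression as keys/values in cross-attention. All module names, dimensions, and the flattened 3x4 pose parameterization are hypothetical illustrations, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class DualPathConditionedBlock(nn.Module):
    """Hypothetical block: explicit pose path + implicit expression path."""
    def __init__(self, dim=320, expr_dim=128, n_heads=8):
        super().__init__()
        # Explicit path: embed a global pose transform
        # (e.g., a flattened 3x4 rotation-translation matrix -> 12 values).
        self.pose_mlp = nn.Sequential(
            nn.Linear(12, dim), nn.SiLU(), nn.Linear(dim, dim)
        )
        # Implicit path: cross-attention from image tokens to the expression latent.
        self.norm = nn.LayerNorm(dim)
        self.expr_proj = nn.Linear(expr_dim, dim)
        self.cross_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, x, pose_transform, expr_latent):
        # x: (B, N, dim) image tokens; pose_transform: (B, 12); expr_latent: (B, T, expr_dim)
        x = x + self.pose_mlp(pose_transform).unsqueeze(1)  # additive pose conditioning
        kv = self.expr_proj(expr_latent)                    # keys/values from expression code
        attn_out, _ = self.cross_attn(self.norm(x), kv, kv)
        return x + attn_out                                 # residual expression injection

# Usage: each signal can be varied independently (pose-only or expression-only edits).
block = DualPathConditionedBlock()
out = block(torch.randn(2, 64, 320), torch.randn(2, 12), torch.randn(2, 4, 128))
```

Keeping the two paths architecturally separate is what makes decoupled modulation possible: zeroing or swapping one input leaves the other path's conditioning untouched.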
📝 Abstract
Portrait animation from a single source image and a driving video is a long-standing problem. Recent approaches tend to adopt diffusion-based image/video generation models for realistic and expressive animation. However, none of these diffusion models achieves high-fidelity disentangled control of head pose and facial expression, hindering applications like expression-only or pose-only editing and animation. To address this, we propose DeX-Portrait, a novel approach capable of generating expressive portrait animation driven by disentangled pose and expression signals. Specifically, we represent the pose as an explicit global transformation and the expression as an implicit latent code. First, we design a powerful motion trainer to learn both pose and expression encoders for extracting precise and decomposed driving signals. Then we inject the pose transformation into the diffusion model through a dual-branch conditioning mechanism, and the expression latent through cross-attention. Finally, we design a progressive hybrid classifier-free guidance scheme for more faithful identity consistency. Experiments show that our method outperforms state-of-the-art baselines on both animation quality and disentangled controllability.
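For intuition, the sketch below shows one plausible form a "progressive hybrid classifier-free guidance" step could take: separate guidance terms for identity and motion, with an identity weight that ramps up over the denoising schedule. The three-branch split, weight values, and linear schedule are assumptions for illustration only; the paper's exact formulation is not given here.

```python
import torch

def progressive_hybrid_cfg(eps_uncond, eps_id, eps_full, step, total_steps,
                           w_id_max=2.0, w_motion=4.5):
    """Combine three noise predictions from the same diffusion model:
    eps_uncond: all conditions dropped
    eps_id:     identity/appearance condition only
    eps_full:   identity + pose + expression conditions
    """
    # Hypothetical progressive schedule: identity guidance grows as denoising
    # proceeds, emphasizing identity consistency in the fine-detail steps.
    w_id = w_id_max * (step / max(total_steps - 1, 1))
    return (eps_uncond
            + w_id * (eps_id - eps_uncond)       # identity-preserving term
            + w_motion * (eps_full - eps_id))    # motion (pose + expression) term

# Usage inside a sampling loop (arbitrary latent shapes):
eps_u, eps_i, eps_f = (torch.randn(1, 4, 64, 64) for _ in range(3))
eps = progressive_hybrid_cfg(eps_u, eps_i, eps_f, step=30, total_steps=50)
```

The hybrid structure lets identity and motion be guided with independent strengths, which is consistent with the paper's stated goal of more faithful identity consistency under strong motion conditioning.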