DeX-Portrait: Disentangled and Expressive Portrait Animation via Explicit and Latent Motion Representations

📅 2025-12-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing portrait animation methods struggle to achieve high-fidelity disentanglement between head pose and facial expression, which severely limits independent controllability. To address this, we propose the first diffusion-based framework to fully disentangle explicit pose from implicit expression control. Our method introduces a dual-path motion representation: a global geometric transformation for explicit pose modeling and a latent expression code for implicit expression learning. We further design a dual-branch conditional injection scheme and a progressive hybrid classifier-free guidance mechanism to strengthen conditional control fidelity, while cross-attention integration enables precise, decoupled modulation of pose and expression. Evaluated under single-image driving, our approach generates high-quality animations with superior identity preservation, motion accuracy, and disentangled controllability, outperforming state-of-the-art methods across key benchmarks.

📝 Abstract
Portrait animation from a single source image and a driving video is a long-standing problem. Recent approaches tend to adopt diffusion-based image/video generation models for realistic and expressive animation. However, none of these diffusion models realizes high-fidelity disentangled control between the head pose and facial expression, hindering applications like expression-only or pose-only editing and animation. To address this, we propose DeX-Portrait, a novel approach capable of generating expressive portrait animation driven by disentangled pose and expression signals. Specifically, we represent the pose as an explicit global transformation and the expression as an implicit latent code. First, we design a powerful motion trainer to learn both pose and expression encoders for extracting precise and decomposed driving signals. Then we propose to inject the pose transformation into the diffusion model through a dual-branch conditioning mechanism, and the expression latent through cross attention. Finally, we design a progressive hybrid classifier-free guidance for more faithful identity consistency. Experiments show that our method outperforms state-of-the-art baselines on both animation quality and disentangled controllability.
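The abstract mentions a progressive hybrid classifier-free guidance for combining the disentangled pose and expression conditions. The paper's exact progressive schedule is not given here, but the standard multi-condition classifier-free guidance it builds on can be sketched as follows; the function name, weights, and composition order are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def hybrid_cfg(eps_uncond, eps_pose, eps_full, w_pose=2.0, w_expr=2.0):
    """Generic multi-condition classifier-free guidance.

    eps_uncond: noise prediction with all conditions dropped
    eps_pose:   noise prediction with the pose condition only
    eps_full:   noise prediction with pose and expression conditions
    w_pose/w_expr: guidance weights (a "progressive" variant would
    vary these over denoising steps; schedule assumed, not specified)
    """
    return (eps_uncond
            + w_pose * (eps_pose - eps_uncond)    # steer toward the pose signal
            + w_expr * (eps_full - eps_pose))     # steer toward the expression signal

# Example: with unit weights the guided output equals the fully
# conditional prediction, since the two difference terms telescope.
out = hybrid_cfg(np.zeros(3), np.ones(3), 2 * np.ones(3), w_pose=1.0, w_expr=1.0)
```

Factoring the guidance into separate pose and expression difference terms is what lets each signal be amplified or suppressed independently, which matches the disentangled-control goal described above.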
Problem

Research questions and friction points this paper is trying to address.

Disentangles head pose and facial expression control in portrait animation
Generates expressive animation using explicit and latent motion representations
Enables independent editing of pose and expression for enhanced controllability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explicit pose transformation via dual-branch conditioning
Implicit expression latent injection via cross attention
Progressive hybrid classifier-free guidance for identity consistency
Yuxiang Shi, University of Science and Technology of China
Zhe Li, Central Media Technology Institute, Huawei
Yanwen Wang, Nanjing University
Hao Zhu, Nanjing University
Xun Cao, Nanjing University (Computational Photography, Computational Imaging, Image & Video Processing)
Ligang Liu, University of Science and Technology of China (Computer Graphics, Geometry Processing, 3D Printing)