Controllable and Expressive One-Shot Video Head Swapping

πŸ“… 2025-06-20
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing head swapping methods struggle to simultaneously preserve hairstyle diversity, handle complex backgrounds, and enable fine-grained facial expression control. To address this, we propose the first single-image-to-video controllable head swapping framework that enables disentangled editing of identity, expression, and pose. Our method integrates 3D Morphable Model (3DMM)-based disentangled expression retargeting with shape-agnostic mask fusion, augmented by identity-aware contextual modeling, hairstyle enhancement, keypoint-scale-aware retargeting, and latent diffusion-based generation. It achieves high-fidelity head transfer while preserving the original video’s body motion and background, and supports post-hoc expression and pose editing. Experiments demonstrate a 23.6% improvement in identity preservation, state-of-the-art expression naturalness, cross-identity transfer between real and virtual characters, and seamless integration in challenging scenes with diverse backgrounds and hairstyles.

πŸ“ Abstract
In this paper, we propose a novel diffusion-based multi-condition controllable framework for video head swapping, which seamlessly transplants a human head from a static image into a dynamic video while preserving the original body and background of the target video, and further allows users to tweak head expressions and movements during swapping as needed. Existing face-swapping methods mainly focus on localized facial replacement and neglect holistic head morphology, while head-swapping approaches struggle with hairstyle diversity and complex backgrounds, and none of these methods allow users to modify the transplanted head's expressions after swapping. To tackle these challenges, our method incorporates several innovative strategies within a unified latent diffusion paradigm. 1) Identity-preserving context fusion: We propose a shape-agnostic mask strategy to explicitly disentangle foreground head identity features from background/body contexts, combined with a hair enhancement strategy to achieve robust holistic head identity preservation across diverse hair types and complex backgrounds. 2) Expression-aware landmark retargeting and editing: We propose a disentangled 3DMM-driven retargeting module that decouples identity, expression, and head pose, minimizing the influence of the source image's original expression and supporting expression editing; a scale-aware retargeting strategy is further employed to minimize cross-identity expression distortion for higher transfer precision. Experimental results demonstrate that our method excels at seamless background integration while preserving the identity of the source portrait, and showcases superior expression transfer applicable to both real and virtual characters.
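One way to read the shape-agnostic mask idea in strategy 1: instead of conditioning generation on the tight head silhouette (which would leak the source head's exact outline into the result), the inpainting region is coarsened so the model must synthesize the new head contour itself. The sketch below is a hypothetical simplification, not the paper's implementation; `shape_agnostic_mask` and its `margin` parameter are illustrative names.

```python
import numpy as np

def shape_agnostic_mask(head_mask, margin=8):
    """Replace a tight binary head silhouette with a loose region
    (here, a padded bounding box) that hides the exact head contour.
    Hypothetical simplification of a shape-agnostic mask strategy."""
    ys, xs = np.nonzero(head_mask)
    if ys.size == 0:
        return np.zeros_like(head_mask)
    h, w = head_mask.shape
    y0, y1 = max(ys.min() - margin, 0), min(ys.max() + margin + 1, h)
    x0, x1 = max(xs.min() - margin, 0), min(xs.max() + margin + 1, w)
    out = np.zeros_like(head_mask)
    out[y0:y1, x0:x1] = 1  # coarse region instead of the tight silhouette
    return out
```

The coarse mask still tells the generator where the head goes, but no longer encodes the source head's shape, which is what lets identity features and background/body context be disentangled.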
Problem

Research questions and friction points this paper is trying to address.

Seamlessly transplant human head from image to video
Preserve body and background while tweaking expressions
Address hairstyle diversity and complex backgrounds effectively
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion-based multi-condition controllable framework
Shape-agnostic mask for identity preservation
3DMM-driven retargeting for expression editing
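The 3DMM-driven retargeting idea above can be illustrated at the coefficient level: a 3D Morphable Model factors a face into separate identity, expression, and pose coefficients, so retargeting amounts to keeping the source's identity while taking expression and pose from the driving frame. The sketch below is a minimal hypothetical illustration (the dict layout, coefficient sizes, and `expr_scale` knob, which gestures at the scale-aware retargeting, are all assumptions, not the paper's actual interface).

```python
import numpy as np

def retarget_coefficients(source, driving, expr_scale=1.0):
    """Combine source identity with driving expression and pose.

    `source` and `driving` are dicts of 3DMM coefficient arrays with
    hypothetical keys 'id', 'expr', 'pose'. `expr_scale` stands in for
    scale-aware retargeting: rescaling expression magnitude to reduce
    cross-identity expression distortion.
    """
    return {
        "id": source["id"],                    # keep source identity
        "expr": expr_scale * driving["expr"],  # take driving expression
        "pose": driving["pose"],               # take driving head pose
    }

# Toy example with random coefficients
rng = np.random.default_rng(0)
src = {"id": rng.normal(size=80), "expr": rng.normal(size=64), "pose": rng.normal(size=6)}
drv = {"id": rng.normal(size=80), "expr": rng.normal(size=64), "pose": rng.normal(size=6)}
out = retarget_coefficients(src, drv, expr_scale=0.9)
```

Because the factors are disentangled, post-hoc editing (e.g. neutralizing or exaggerating an expression) reduces to modifying the `expr` coefficients before rendering the landmarks that condition the diffusion model.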
πŸ”Ž Similar Papers
No similar papers found.