X-Dyna: Expressive Dynamic Human Image Animation

📅 2025-01-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing methods for animating a single human image from a driving video suffer from rigid motion, facial distortion, and poor background integration. This paper proposes X-Dyna, a zero-shot framework for dynamic human animation. Its core contributions are threefold: (1) a lightweight Dynamics-Adapter module that injects reference-appearance context into the spatial attention of a diffusion backbone, enabling joint modeling of appearance and dynamics; (2) an identity-disentangled local expression control mechanism that overcomes the limitations of pose-dominant paradigms in capturing micro-expressions and fine-grained facial motion; and (3) a decoupled motion-module architecture trained jointly on a diverse mix of human and scene videos to enhance contextual awareness. Extensive experiments show that the method outperforms state-of-the-art approaches in both qualitative and quantitative evaluations, producing high-fidelity, vivid, and natural animations. The source code is publicly available.
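To make the Dynamics-Adapter idea concrete, here is a minimal sketch (not the authors' implementation; all names, shapes, and the shared-K/V simplification are illustrative assumptions) of injecting reference-appearance tokens into spatial attention: the reference tokens are concatenated onto the keys/values of the denoising branch's self-attention, so each frame token attends to both its own frame and the reference image.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_attention_with_reference(q, kv, ref_kv):
    """Illustrative attention injection: concatenate reference-appearance
    tokens onto the keys/values so queries attend to frame + reference."""
    k = np.concatenate([kv, ref_kv], axis=0)  # (N + M, d)
    v = k                                     # shared K/V, for simplicity
    scores = q @ k.T / np.sqrt(q.shape[-1])   # (N, N + M)
    return softmax(scores, axis=-1) @ v       # (N, d)

# Toy tokens: 4 spatial frame tokens, 2 reference tokens, dim 8.
rng = np.random.default_rng(0)
frame_tokens = rng.standard_normal((4, 8))
ref_tokens = rng.standard_normal((2, 8))
out = spatial_attention_with_reference(frame_tokens, frame_tokens, ref_tokens)
print(out.shape)  # (4, 8)
```

The output keeps the frame-token shape, so such a module can slot into existing spatial-attention layers without disturbing the temporal motion modules, which matches the paper's claim that appearance injection and motion synthesis remain decoupled.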

📝 Abstract
We introduce X-Dyna, a novel zero-shot, diffusion-based pipeline for animating a single human image using facial expressions and body movements derived from a driving video, generating realistic, context-aware dynamics for both the subject and the surrounding environment. Building on prior approaches centered on human pose control, X-Dyna addresses key shortcomings that cause the loss of dynamic details, enhancing the lifelike qualities of human video animations. At the core of our approach is the Dynamics-Adapter, a lightweight module that effectively integrates reference appearance context into the spatial attentions of the diffusion backbone while preserving the capacity of motion modules to synthesize fluid and intricate dynamic details. Beyond body pose control, we connect a local control module with our model to capture identity-disentangled facial expressions, facilitating accurate expression transfer for enhanced realism in animated scenes. Together, these components form a unified framework capable of learning physical human motion and natural scene dynamics from a diverse blend of human and scene videos. Comprehensive qualitative and quantitative evaluations demonstrate that X-Dyna outperforms state-of-the-art methods, creating highly lifelike and expressive animations. The code is available at https://github.com/bytedance/X-Dyna.
Problem

Research questions and friction points this paper is trying to address.

Static-to-Animated Conversion
Unnatural Movements
Low Background Integration
Innovation

Methods, ideas, or system contributions that make the work stand out.

X-Dyna
Dynamics-Adapter
Enhanced Facial Expression