🤖 AI Summary
Existing approaches to avatar animation suffer from identity distortion, background jitter, unnatural facial dynamics, and poor body-proportion adaptation. This paper introduces SkyReels-A1, a novel high-fidelity avatar animation framework built upon the Video Diffusion Transformer (Video DiT). It incorporates an expression-aware conditioning module for fine-grained motion control; a facial image–text alignment mechanism that jointly encodes identity features and action semantics; and a multi-stage progressive training paradigm that optimizes identity stability and expression authenticity together. Extensive evaluations demonstrate significant improvements in visual coherence, identity fidelity, and temporal consistency, both on avatar-specific benchmarks and across diverse body morphologies. The method is validated in practical applications including virtual avatars, remote telepresence, and digital media production, establishing new state-of-the-art performance.
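The expression-aware conditioning described above can be pictured as a small encoder that turns per-frame facial landmarks into tokens the Video DiT attends to. Below is a minimal PyTorch sketch of that idea; the module name `ExpressionAwareConditioning`, the 68-landmark input, the hidden size, and the token-fusion strategy are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ExpressionAwareConditioning(nn.Module):
    """Minimal sketch: project per-frame facial landmarks into conditioning
    tokens for a video DiT. Names, sizes, and the fusion strategy are
    illustrative assumptions, not the paper's published design."""

    def __init__(self, num_landmarks: int = 68, hidden_dim: int = 1024):
        super().__init__()
        # Encode the (x, y) landmark coordinates of each frame into one token.
        self.landmark_proj = nn.Sequential(
            nn.Linear(num_landmarks * 2, hidden_dim),
            nn.SiLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, landmarks: torch.Tensor) -> torch.Tensor:
        # landmarks: (batch, frames, num_landmarks, 2)
        # returns:   (batch, frames, hidden_dim)
        b, f, n, _ = landmarks.shape
        return self.landmark_proj(landmarks.reshape(b, f, n * 2))

# Usage: the resulting tokens would be concatenated with (or cross-attended
# by) the DiT's video latent tokens so generation follows the driving motion.
cond = ExpressionAwareConditioning()
tokens = cond(torch.randn(1, 16, 68, 2))  # -> torch.Size([1, 16, 1024])
```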
📝 Abstract
We present SkyReels-A1, a simple yet effective framework built upon a video diffusion Transformer to facilitate portrait image animation. Existing methodologies still encounter issues, including identity distortion, background instability, and unrealistic facial dynamics, particularly in head-only animation scenarios. Moreover, extending them to accommodate diverse body proportions usually leads to visual inconsistencies or unnatural articulations. To address these challenges, SkyReels-A1 capitalizes on the strong generative capabilities of the video DiT, enhancing facial motion transfer precision, identity retention, and temporal coherence. The system incorporates an expression-aware conditioning module that enables seamless video synthesis driven by expression-guided landmark inputs. A facial image-text alignment module strengthens the fusion of facial attributes with motion trajectories, reinforcing identity preservation. Additionally, SkyReels-A1 employs a multi-stage training paradigm that incrementally refines the correlation between expressions and motion while ensuring stable identity reproduction. Extensive empirical evaluations highlight the model's ability to produce visually coherent and compositionally diverse results, making it highly applicable to domains such as virtual avatars, remote communication, and digital media generation.
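To make the multi-stage training paradigm concrete, here is a hedged PyTorch sketch of one plausible progressive schedule: early stages train only the motion-conditioning pathway, later stages unfreeze more of the backbone. The stage definitions, the parameter-name filters, and the `diffusion_loss` method are hypothetical placeholders standing in for the paper's actual recipe.

```python
import torch

def progressive_training(model, stages, data_loader):
    """Illustrative multi-stage schedule: each stage unfreezes a growing
    subset of parameters. Stage contents and the loss are assumptions,
    not the published SkyReels-A1 recipe."""
    for stage in stages:
        # Freeze everything, then enable only this stage's trainable subset.
        for name, param in model.named_parameters():
            param.requires_grad = any(key in name for key in stage["trainable"])
        optim = torch.optim.AdamW(
            (p for p in model.parameters() if p.requires_grad), lr=stage["lr"]
        )
        for step, (video, landmarks, ref_image) in enumerate(data_loader):
            # `diffusion_loss` is a hypothetical method standing in for the
            # model's denoising objective conditioned on landmarks + identity.
            loss = model.diffusion_loss(video, landmarks, ref_image)
            optim.zero_grad()
            loss.backward()
            optim.step()
            if step >= stage["steps"]:
                break

# Example schedule: fit the conditioning module first (motion-expression
# correlation), then jointly fine-tune with the DiT blocks for identity.
stages = [
    {"trainable": ["landmark_proj"], "lr": 1e-4, "steps": 10_000},
    {"trainable": ["landmark_proj", "blocks"], "lr": 5e-5, "steps": 20_000},
]
```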