🤖 AI Summary
Existing methods struggle to simultaneously preserve fine-grained facial and hand details and maintain spatiotemporal consistency in long-duration (over three seconds) human image animation. To address this, we propose a high-fidelity long-video generation framework built on the Diffusion Transformer (DiT). Our approach introduces three key components: (1) a hybrid implicit guidance signal combined with a sharpness-aware guidance factor; (2) a time-aware position shift fusion mechanism (the Position Shift Adaptive Module) that enables arbitrary-length video synthesis; and (3) skeleton-aligned modeling coupled with identity-agnostic data augmentation. Together, these components strengthen fine-grained structural modeling and inter-frame coherence. Quantitative and qualitative evaluations demonstrate state-of-the-art performance on key criteria, including facial expression fidelity, hand motion dynamics, and temporal smoothness, along with superior visual quality and strong spatiotemporal consistency.
📝 Abstract
Recent progress in diffusion models has significantly advanced human image animation. While existing methods can generate temporally consistent results for short or regular motions, substantial challenges remain, particularly in generating long-duration videos. Furthermore, the synthesis of fine-grained facial and hand details remains under-explored, limiting the use of current approaches in real-world, high-quality settings. To address these limitations, we propose a diffusion transformer (DiT)-based framework that focuses on generating high-fidelity, long-duration human animation videos. First, we design a set of hybrid implicit guidance signals and a sharpness guidance factor, enabling our framework to additionally incorporate detailed facial and hand features as guidance. Next, we introduce the Position Shift Adaptive Module, which applies time-aware position shift fusion and modifies the input format of the DiT backbone, enabling video generation of arbitrary length. Finally, we introduce a novel data augmentation strategy and a skeleton alignment model to reduce the impact of body-shape variation across different identities. Experimental results demonstrate that our method outperforms existing state-of-the-art approaches, achieving superior performance in both high-fidelity and long-duration human image animation.
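The abstract does not give implementation details for the Position Shift Adaptive Module, but shift-based long-video generation is commonly realized by running inference over fixed-size frame windows whose start positions shift along the time axis, so that overlapping windows share context. The sketch below illustrates only that windowing idea under our own assumptions; the function name, window size, and shift stride are illustrative, not values from the paper.

```python
def shifted_windows(num_frames: int, window: int = 16, shift: int = 8):
    """Split an arbitrary-length frame sequence into fixed-size,
    overlapping inference windows whose start positions shift by
    `shift` frames (window=16 and shift=8 are illustrative defaults,
    not values from the paper).

    Returns a list of (start, end) frame index pairs, end-exclusive.
    """
    if num_frames <= window:
        # Sequence fits in a single window; no shifting needed.
        return [(0, num_frames)]
    starts = list(range(0, num_frames - window + 1, shift))
    if starts[-1] != num_frames - window:
        # Add a final window so the tail frames are covered too.
        starts.append(num_frames - window)
    return [(s, s + window) for s in starts]
```

With shift smaller than window, consecutive windows overlap, which is what lets a fixed-context backbone propagate temporal information across an arbitrarily long sequence.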
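The skeleton alignment model described above is learned; as rough intuition for what alignment across identities must accomplish, a classical geometric baseline retargets a driving pose onto a reference subject by keeping the driving pose's bone directions while rescaling each bone to the reference subject's bone length. The sketch below is that baseline, not the paper's model, and all names are hypothetical.

```python
import math

def retarget_skeleton(driving, reference, bones, root):
    """Geometric 2D pose retargeting baseline (not the paper's learned
    model): keep the driving pose's bone directions, but rescale each
    bone to the reference subject's bone length.

    driving, reference: dict joint_name -> (x, y)
    bones: list of (parent, child) pairs, ordered root-to-leaf
    root: joint name whose position is kept from the driving pose
    """
    out = {root: driving[root]}
    for parent, child in bones:
        # Direction of the bone in the driving pose.
        dx = driving[child][0] - driving[parent][0]
        dy = driving[child][1] - driving[parent][1]
        d = math.hypot(dx, dy) or 1.0  # guard against zero-length bones
        # Length of the same bone on the reference subject.
        rx = reference[child][0] - reference[parent][0]
        ry = reference[child][1] - reference[parent][1]
        r = math.hypot(rx, ry)
        # Place the child along the driving direction at reference length.
        px, py = out[parent]
        out[child] = (px + dx / d * r, py + dy / d * r)
    return out
```

Because bones are processed root-to-leaf, each child is positioned relative to its already-retargeted parent, so limb-length differences between identities do not accumulate into global drift.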