🤖 AI Summary
In pose-guided video generation, simultaneously ensuring appearance consistency (e.g., physique and anatomical proportions) and temporal coherence remains challenging. This paper proposes a training-free enhancement framework to address this issue. The method introduces (1) structure-motion disentangled modeling, which explicitly separates skeletal geometric priors from dynamic motion priors; and (2) pixel-level conditional alignment, which jointly leverages pose-guided geometric calibration and reference-image-driven feature mapping to preserve inter-frame appearance fidelity. This dual-alignment strategy requires no fine-tuning and no large-scale annotated data. Experiments demonstrate substantial improvements in physique consistency, proportion stability, and temporal smoothness. Under low-resource settings, without task-specific training data or model adaptation, the approach achieves visual quality comparable to supervised training methods, offering a lightweight, controllable paradigm for character animation that balances expressiveness, fidelity, and efficiency.
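To make the structure-motion disentanglement concrete, here is a minimal sketch of one way such a separation could work: bone lengths (the skeletal, geometric prior) are taken from the reference character, while per-frame bone directions (the motion prior) are taken from the driving pose sequence. The skeleton topology, joint layout, and function names below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical skeleton topology: (parent, child) joint index pairs,
# listed in parent-before-child order so bones can be rebuilt in one pass.
BONES = [(0, 1), (1, 2), (2, 3), (1, 4), (4, 5),
         (1, 6), (6, 7), (7, 8), (1, 9), (9, 10), (10, 11)]

def bone_lengths(skeleton):
    """Per-bone lengths for a (J, 2) array of 2D joint positions."""
    return np.array([np.linalg.norm(skeleton[c] - skeleton[p]) for p, c in BONES])

def retarget_pose(driving_pose, ref_lengths, root=0):
    """Keep the driving frame's bone directions (motion prior) but rescale
    each bone to the reference character's length (skeletal prior)."""
    out = driving_pose.copy()
    out[root] = driving_pose[root]
    for (p, c), ref_len in zip(BONES, ref_lengths):
        direction = driving_pose[c] - driving_pose[p]
        norm = np.linalg.norm(direction) + 1e-8
        out[c] = out[p] + direction / norm * ref_len
    return out

def retarget_sequence(ref_pose, driving_seq):
    """Retarget every driving frame so the conditioning poses match the
    reference physique before they are fed to the video generator.
    ref_pose: (J, 2) skeleton from the reference image.
    driving_seq: (T, J, 2) skeletons from the driving video."""
    ref_lengths = bone_lengths(ref_pose)
    return np.stack([retarget_pose(frame, ref_lengths) for frame in driving_seq])
```

In this reading, the design choice is that proportions never come from the driving video at all, which is what keeps the generated character's physique stable across frames.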
📝 Abstract
Character animation is a transformative area of computer graphics and vision, enabling dynamic, realistic video animation from static images. Despite recent advances, maintaining appearance consistency in generated animations remains a challenge. Our approach addresses this with a training-free framework that preserves the reference image's subtleties, such as physique and proportions, through a dual alignment strategy. We decouple skeletal and motion priors from the pose information, enabling precise control over animation generation, and we improve pixel-level alignment for conditional control from the reference character, enhancing the temporal consistency and visual cohesion of the animations. The method substantially improves video generation quality without requiring large datasets or expensive computational resources.
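As one plausible reading of the pixel-level alignment for conditional control, the sketch below fits a global scale and translation on the first driving frame so that all subsequent pose conditions land on the reference character's image coordinates. This is an assumed, simplified calibration for illustration only; the function names are hypothetical and the paper's actual alignment may differ.

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares scale s and translation t such that s * src + t ≈ dst.
    src, dst: (J, 2) corresponding 2D joint positions."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    scale = (src_c * dst_c).sum() / ((src_c ** 2).sum() + 1e-8)
    translation = dst_mean - scale * src_mean
    return scale, translation

def align_driving_to_reference(ref_pose, driving_seq):
    """Calibrate the whole driving sequence with one transform fitted on the
    first frame, so pose conditions are pixel-aligned with the reference.
    ref_pose: (J, 2) skeleton from the reference image.
    driving_seq: (T, J, 2) skeletons from the driving video."""
    scale, translation = fit_similarity(driving_seq[0], ref_pose)
    return scale * driving_seq + translation
```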