🤖 AI Summary
Existing diffusion models are computationally and memory-intensive, limiting their ability to generate long, temporally coherent upper-body human animations. To address this, we propose a feedback-guided diffusion mechanism that enables arbitrary-length, high-fidelity, and temporally consistent facial expression and gesture animation without increasing model parameters or computational cost, and without requiring additional training. Built upon the Stable Diffusion architecture, our method takes a single portrait image and a driving pose sequence as input and dynamically corrects inter-frame generation drift via frame-to-frame feedback, substantially improving temporal coherence. To facilitate rigorous evaluation, we introduce the first large-scale upper-body animation benchmark dataset. Experiments demonstrate superior temporal consistency and detail fidelity in long-duration generation (>100 frames), along with strong generalization across diverse subjects and motions. Our approach establishes a novel paradigm for long-video synthesis.
📝 Abstract
Recent advancements in diffusion models have significantly improved the realism and generalizability of character-driven animation, enabling the synthesis of high-quality motion from just a single RGB image and a set of driving poses. Nevertheless, generating temporally coherent long-form content remains challenging. Existing approaches are constrained by computational and memory limitations: they are typically trained on short video segments and therefore perform effectively only over limited frame lengths, hindering extended coherent generation. To address these constraints, we propose TalkingPose, a novel diffusion-based framework specifically designed for producing long-form, temporally consistent human upper-body animations. TalkingPose leverages driving frames to precisely capture expressive facial and hand movements, transferring them seamlessly to a target actor through a Stable Diffusion backbone. To ensure continuous motion and enhance temporal coherence, we introduce a feedback-driven mechanism built upon image-based diffusion models. Notably, this mechanism does not incur additional computational cost or require a secondary training stage, enabling the generation of animations of unlimited duration. Additionally, we introduce a comprehensive, large-scale dataset to serve as a new benchmark for human upper-body animation.
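To make the feedback idea concrete, here is a minimal, hypothetical sketch of a chunked, feedback-guided generation loop. The function names (`denoise_chunk`, `generate_long_sequence`), the chunk length, and the anchor-blending details are illustrative assumptions rather than the paper's actual implementation; only the high-level idea of feeding the last generated frame back as conditioning for the next chunk, so that arbitrary-length sequences can be produced without extra parameters or retraining, comes from the description above.

```python
import torch

def denoise_chunk(reference, poses, anchor_frame):
    """Placeholder for one diffusion sampling pass over a short pose window.

    A real implementation would run iterative denoising with a Stable
    Diffusion style UNet conditioned on the reference portrait and the pose
    slice; here we return pose-shaped frames biased toward the anchor so the
    example stays self-contained and runnable.
    """
    return poses + 0.1 * anchor_frame

def generate_long_sequence(reference, driving_poses, chunk_len=16):
    """Autoregressive generation with frame-to-frame feedback.

    The last frame of each generated chunk becomes the anchor for the next
    chunk, correcting inter-frame drift without a second training stage.
    The blending details are illustrative assumptions.
    """
    anchor = reference                      # start from the portrait itself
    outputs = []
    for start in range(0, driving_poses.shape[0], chunk_len):
        poses = driving_poses[start:start + chunk_len]
        chunk = denoise_chunk(reference, poses, anchor)
        anchor = chunk[-1]                  # feedback: last frame conditions next chunk
        outputs.append(chunk)
    return torch.cat(outputs, dim=0)        # arbitrary-length output

if __name__ == "__main__":
    ref = torch.randn(3, 64, 64)            # single RGB portrait (toy resolution)
    poses = torch.randn(100, 3, 64, 64)     # driving pose maps, >100 frames
    video = generate_long_sequence(ref, poses)
    print(video.shape)                      # torch.Size([100, 3, 64, 64])
```

Because the feedback only reuses frames the model has already produced, this style of loop adds no parameters and no extra denoising passes per frame, which is consistent with the cost claims in the abstract.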