🤖 AI Summary
Existing video diffusion models (VDMs) remain underexplored for general-purpose text-driven human motion video generation, with most prior work restricted to image-to-video setups or narrow domains (e.g., dance). To address this gap, we propose CAMEO, a cascaded framework that systematically bridges a text-to-motion (T2M) model with a conditional video diffusion model. Our key contributions are: (1) careful preparation of both textual prompts and visual conditions for training the VDM, ensuring robust alignment between motion descriptions, conditioning signals, and the generated videos; and (2) a camera-aware conditioning module that automatically selects viewpoints consistent with the input text, improving coherence and reducing manual intervention. Evaluated on the MovieGen benchmark and a newly introduced benchmark tailored to the T2M-VDM combination, CAMEO produces high-fidelity, temporally coherent human motion videos across diverse scenarios.
📝 Abstract
Human video generation is becoming an increasingly important task with broad applications in graphics, entertainment, and embodied AI. Despite the rapid progress of video diffusion models (VDMs), their use for general-purpose human video generation remains underexplored, with most works constrained to image-to-video setups or narrow domains such as dance videos. In this work, we propose CAMEO, a cascaded framework for general human motion video generation. It seamlessly bridges Text-to-Motion (T2M) models and conditional VDMs, mitigating, through carefully designed components, the suboptimal factors that can arise in this combination during both training and inference. Specifically, we analyze and prepare both the textual prompts and the visual conditions used to train the VDM, ensuring robust alignment between motion descriptions, conditioning signals, and the generated videos. Furthermore, we introduce a camera-aware conditioning module that connects the two stages, automatically selecting viewpoints aligned with the input text to enhance coherence and reduce manual intervention. We demonstrate the effectiveness of our approach on both the MovieGen benchmark and a newly introduced benchmark tailored to the T2M-VDM combination, while highlighting its versatility across diverse use cases.
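To make the cascaded design concrete, below is a minimal Python sketch of the pipeline as the abstract describes it: a T2M stage generates motion from text, a camera-aware module selects a viewpoint consistent with the prompt, and a conditional VDM synthesizes the video from the prepared prompt and the rendered motion condition. All class and function names here are illustrative placeholders, not the authors' actual API.

```python
# Hedged sketch of a CAMEO-style cascade (illustrative only; names are hypothetical).

class TextToMotionModel:
    def generate(self, prompt: str):
        """Stage 1: text -> 3D human motion sequence (e.g., an off-the-shelf T2M model)."""
        raise NotImplementedError

class CameraSelector:
    def select(self, prompt: str, motion):
        """Camera-aware conditioning: choose a viewpoint consistent with the prompt,
        removing the need for a manually specified camera."""
        raise NotImplementedError

class MotionConditionedVDM:
    def sample(self, prompt: str, condition):
        """Stage 2: conditional video diffusion guided by the prepared prompt
        and the rendered motion/viewpoint condition."""
        raise NotImplementedError

def render_motion_condition(motion, viewpoint):
    """Project the 3D motion into the selected viewpoint to obtain per-frame
    conditioning images for the VDM (placeholder)."""
    raise NotImplementedError

def generate_human_video(prompt: str,
                         t2m: TextToMotionModel,
                         camera: CameraSelector,
                         vdm: MotionConditionedVDM):
    motion = t2m.generate(prompt)                             # text -> motion
    viewpoint = camera.select(prompt, motion)                 # text + motion -> viewpoint
    condition = render_motion_condition(motion, viewpoint)    # motion -> per-frame condition
    return vdm.sample(prompt, condition)                      # condition + text -> video frames
```

The point of the sketch is that viewpoint selection sits between the two stages, so the VDM always receives a conditioning signal already consistent with the text rather than relying on a manually chosen camera.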