MotionDuet: Dual-Conditioned 3D Human Motion Generation with Video-Regularized Text Learning

📅 2025-11-22
📈 Citations: 0 · Influential: 0
🤖 AI Summary
Existing 3D human motion generation methods face a fundamental trade-off between realism and controllability: text-driven approaches lack biomechanical plausibility, while video-driven methods struggle to interpret high-level semantic intent. To address this, we propose a dual-conditioned (video + text) diffusion-based generative framework. Our key contributions are: (1) Dual-stream Unified Encoding and Transformation (DUET), which jointly models temporal dynamics from video and semantic intent from text; (2) the Distribution-Aware Structural Harmonization (DASH) loss, the first to explicitly align generated motion with the distributional and structural statistics of video features in the latent space; and (3) an automatic multimodal guidance mechanism that dynamically balances contributions from both modalities. Leveraging pre-trained models (e.g., VideoMAE) for video representation, combined with dynamic attention fusion and latent-space mapping, our method achieves state-of-the-art performance on the AMASS and KIT-ML benchmarks, producing motions with superior realism, diversity, and fine-grained controllability.
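
The summary does not specify DUET's internals beyond "unified encoding" and "dynamic attention fusion," but one way to picture a dual-stream fusion block is sketched below in PyTorch. All names, dimensions, and the sigmoid gating scheme are assumptions for illustration, not the paper's implementation:

```python
import torch
import torch.nn as nn

class DuetFusion(nn.Module):
    """Hypothetical dual-stream fusion block in the spirit of DUET.

    Video tokens (e.g., VideoMAE features) and text tokens are projected
    to a shared width; a motion latent sequence cross-attends to each
    stream, and a learned per-token gate blends the two results."""

    def __init__(self, d_motion=256, d_video=768, d_text=512, n_heads=4):
        super().__init__()
        self.video_proj = nn.Linear(d_video, d_motion)  # unify video width
        self.text_proj = nn.Linear(d_text, d_motion)    # unify text width
        self.attn_video = nn.MultiheadAttention(d_motion, n_heads, batch_first=True)
        self.attn_text = nn.MultiheadAttention(d_motion, n_heads, batch_first=True)
        # "dynamic attention fusion" read as a per-token scalar gate (assumption)
        self.gate = nn.Sequential(nn.Linear(2 * d_motion, 1), nn.Sigmoid())

    def forward(self, motion, video_feats, text_feats):
        # motion: (B, T, d_motion); video_feats: (B, Tv, d_video);
        # text_feats: (B, Tt, d_text)
        v = self.video_proj(video_feats)
        t = self.text_proj(text_feats)
        m_v, _ = self.attn_video(motion, v, v)  # motion queries the video stream
        m_t, _ = self.attn_text(motion, t, t)   # motion queries the text stream
        g = self.gate(torch.cat([m_v, m_t], dim=-1))  # (B, T, 1), in [0, 1]
        return motion + g * m_v + (1 - g) * m_t       # residual dual fusion
```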

📝 Abstract
3D human motion generation is pivotal across film, animation, gaming, and embodied intelligence. Traditional 3D motion synthesis relies on costly motion capture, while recent work shows that 2D videos provide rich, temporally coherent observations of human behavior. Existing approaches, however, either map high-level text descriptions to motion or rely solely on video conditioning, leaving a gap between generated dynamics and real-world motion statistics. We introduce MotionDuet, a multimodal framework that aligns motion generation with the distribution of video-derived representations. In this dual-conditioning paradigm, video cues extracted from a pretrained model (e.g., VideoMAE) ground low-level motion dynamics, while textual prompts provide semantic intent. To bridge the distribution gap across modalities, we propose Dual-stream Unified Encoding and Transformation (DUET) and a Distribution-Aware Structural Harmonization (DASH) loss. DUET fuses video-informed cues into the motion latent space via unified encoding and dynamic attention, while DASH aligns motion trajectories with both distributional and structural statistics of video features. An auto-guidance mechanism further balances textual and visual signals by leveraging a weakened copy of the model, enhancing controllability without sacrificing diversity. Extensive experiments demonstrate that MotionDuet generates realistic and controllable human motions, surpassing strong state-of-the-art baselines.
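
The auto-guidance mechanism is described only as leveraging "a weakened copy of the model." One plausible reading, in the spirit of autoguidance for diffusion models, is to extrapolate the strong model's prediction away from the weak copy's. A minimal sketch assuming an epsilon-prediction diffusion denoiser; the interface and the guidance weight `w` are hypothetical:

```python
import torch

@torch.no_grad()
def autoguided_eps(strong, weak, x_t, t, video_cond, text_cond, w=2.0):
    """Guided noise prediction using a deliberately weakened model copy.

    `strong` and `weak` share the interface eps = f(x_t, t, video, text);
    `weak` might be an under-trained or reduced-capacity copy (an assumption;
    the paper's exact weakening scheme is not stated here)."""
    eps_strong = strong(x_t, t, video_cond, text_cond)
    eps_weak = weak(x_t, t, video_cond, text_cond)
    # w > 1 pushes the sample toward what the strong model knows and the
    # weak one does not, sharpening control without the diversity collapse
    # that heavy classifier-free guidance can cause.
    return eps_weak + w * (eps_strong - eps_weak)
```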
Problem

Research questions and friction points this paper is trying to address.

Bridges the gap between text descriptions and video-conditioned human motion generation
Aligns generated 3D motions with real-world video statistics and distributions
Enhances motion controllability while maintaining diversity through dual-conditioning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-conditioning paradigm with video and text inputs
DUET encoding fuses video cues via dynamic attention
DASH loss aligns motion with distributional and structural statistics of video features (see the sketch after this list)
Auto-guidance via a weakened copy of the model balances text and video signals without sacrificing diversity
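
The DASH loss is described as aligning motion with both distributional and structural statistics of video features; a natural reading is first- and second-moment matching in a shared latent space. A minimal sketch under that assumption, not the paper's actual objective; the function name and the shared-width premise are hypothetical:

```python
import torch

def dash_style_loss(motion_lat, video_feats, eps=1e-6):
    """Moment-matching sketch of a DASH-like alignment objective.

    motion_lat, video_feats: (B, T, D) tensors assumed to live in a shared
    latent space (e.g., after DUET's unified encoding)."""
    m = motion_lat.reshape(-1, motion_lat.size(-1))
    v = video_feats.reshape(-1, video_feats.size(-1))
    # distributional term: match per-dimension means
    mean_loss = (m.mean(0) - v.mean(0)).pow(2).mean()
    # structural term: match cross-dimension covariance
    mc, vc = m - m.mean(0), v - v.mean(0)
    cov_m = mc.T @ mc / (mc.size(0) - 1 + eps)
    cov_v = vc.T @ vc / (vc.size(0) - 1 + eps)
    cov_loss = (cov_m - cov_v).pow(2).mean()
    return mean_loss + cov_loss
```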