MIND-V: Hierarchical Video Generation for Long-Horizon Robotic Manipulation with RL-based Physical Alignment

πŸ“… 2025-12-06
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Current robot manipulation video generation models produce only short, low-diversity clips and rely on hand-crafted trajectories, limiting their utility for embodied imitation learning, which requires long-horizon, physically plausible data. To address this, we propose a hierarchical generative framework integrating vision-language models and world models, structured into three tiers: a Semantic Reasoning Hub for task planning, a Behavioral Semantic Bridge that translates abstract instructions into domain-invariant representations, and a Motor Video Generator for conditional rendering. We introduce Staged Visual Future Rollouts, a test-time strategy for long-horizon robustness, and optimize physical plausibility through GRPO-based reinforcement learning post-training guided by a Physical Foresight Coherence reward computed with the V-JEPA world model. By dynamically aligning semantic planning with dynamics modeling, our method enables the first controllable, scalable synthesis of long-horizon (>10 s) robot manipulation videos. Experiments demonstrate state-of-the-art physical plausibility, task-logical coherence, and generation diversity, significantly improving both the scale and controllability of training data for embodied intelligence.
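
To make the three-tier data flow concrete, here is a minimal structural sketch in Python. All class and method names (`SemanticReasoningHub.plan`, `BehavioralSemanticBridge.encode`, `MotorVideoGenerator.render`) are hypothetical interfaces invented for illustration; the paper does not specify this API.

```python
from dataclasses import dataclass

# Hypothetical interfaces illustrating MIND-V's three-tier data flow:
# instruction -> subtask plan -> domain-invariant behavior code -> video.

@dataclass
class Subtask:
    description: str  # e.g. "grasp the red cube"

class SemanticReasoningHub:
    """Tier 1: a pre-trained VLM decomposes the task into subtasks."""
    def plan(self, instruction: str, scene_image) -> list[Subtask]:
        # In the paper this is VLM-based task planning; stubbed here.
        return [Subtask(step) for step in instruction.split(", then ")]

class BehavioralSemanticBridge:
    """Tier 2: maps abstract subtasks to domain-invariant representations."""
    def encode(self, subtask: Subtask) -> list[float]:
        # Stand-in for the learned behavior embedding.
        return [float(ord(c)) for c in subtask.description[:8]]

class MotorVideoGenerator:
    """Tier 3: conditional video model rendering each subtask segment."""
    def render(self, behavior_code: list[float], context):
        # Would invoke a conditional video generator; stubbed.
        return {"frames": 16, "conditioned_on": behavior_code}

def generate_long_horizon_video(instruction: str, first_frame):
    srh, bsb, mvg = SemanticReasoningHub(), BehavioralSemanticBridge(), MotorVideoGenerator()
    segments, context = [], first_frame
    for subtask in srh.plan(instruction, first_frame):
        clip = mvg.render(bsb.encode(subtask), context)
        segments.append(clip)
        context = clip  # the chosen segment conditions the next stage
    return segments

print(generate_long_horizon_video("pick up the cube, then place it in the bowl", None))
```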

πŸ“ Abstract
Embodied imitation learning is constrained by the scarcity of diverse, long-horizon robotic manipulation data. Existing video generation models for this domain are limited to synthesizing short clips of simple actions and often rely on manually defined trajectories. To this end, we introduce MIND-V, a hierarchical framework designed to synthesize physically plausible and logically coherent videos of long-horizon robotic manipulation. Inspired by cognitive science, MIND-V bridges high-level reasoning with pixel-level synthesis through three core components: a Semantic Reasoning Hub (SRH) that leverages a pre-trained vision-language model for task planning; a Behavioral Semantic Bridge (BSB) that translates abstract instructions into domain-invariant representations; and a Motor Video Generator (MVG) for conditional video rendering. MIND-V employs Staged Visual Future Rollouts, a test-time optimization strategy to enhance long-horizon robustness. To align the generated videos with physical laws, we introduce a GRPO reinforcement learning post-training phase guided by a novel Physical Foresight Coherence (PFC) reward. PFC leverages the V-JEPA world model to enforce physical plausibility by aligning the predicted and actual dynamic evolutions in the feature space. MIND-V demonstrates state-of-the-art performance in long-horizon robotic manipulation video generation, establishing a scalable and controllable paradigm for embodied data synthesis.
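
As a rough illustration of the reward design described above, the sketch below computes a foresight-style reward by comparing predicted and observed future features, then converts a group of rewards into GRPO-style group-relative advantages. The tiny `Linear`/`GRU` modules stand in for the frozen V-JEPA encoder and predictor, and the exact PFC formulation may differ from the paper; treat this as an assumption-laden sketch, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

# Stand-ins for the frozen V-JEPA encoder and predictor (assumed interfaces).
encoder = torch.nn.Linear(3 * 64 * 64, 256)            # frame -> feature
predictor = torch.nn.GRU(256, 256, batch_first=True)   # context feats -> future feats

@torch.no_grad()
def pfc_reward(video: torch.Tensor, context_len: int = 8) -> torch.Tensor:
    """Foresight-style reward: does the observed future match the future
    predicted from the early frames, in feature space?
    video: (B, T, C, H, W) generated clip."""
    B, T = video.shape[:2]
    feats = encoder(video.flatten(2))                   # (B, T, D)
    ctx, future = feats[:, :context_len], feats[:, context_len:]
    _, h = predictor(ctx)                               # summarize the context
    preds, x = [], ctx[:, -1:]                          # roll forward autoregressively
    for _ in range(T - context_len):
        x, h = predictor(x, h)                          # one-step feature prediction
        preds.append(x)
    pred = torch.cat(preds, dim=1)                      # (B, T - context_len, D)
    # Reward: mean cosine similarity between predicted and actual evolution.
    return F.cosine_similarity(pred, future, dim=-1).mean(dim=1)  # (B,)

def grpo_advantages(rewards: torch.Tensor) -> torch.Tensor:
    """GRPO-style baseline: normalize rewards within a group of rollouts
    generated for the same prompt (group-relative advantage)."""
    return (rewards - rewards.mean()) / (rewards.std() + 1e-8)

# Example: 4 candidate rollouts for one prompt, 24 frames of 64x64 RGB.
videos = torch.randn(4, 24, 3, 64, 64)
print(grpo_advantages(pfc_reward(videos)))
```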
Problem

Research questions and friction points this paper is trying to address.

Generating long-horizon robotic manipulation videos from high-level instructions rather than hand-crafted trajectories
Ensuring physical plausibility in generated videos
Synthesizing diverse, scalable embodied data for imitation learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical framework pairing semantic reasoning (SRH, BSB) with conditional video generation (MVG)
GRPO reinforcement learning post-training guided by a Physical Foresight Coherence reward
Staged Visual Future Rollouts for long-horizon robustness (see the sketch after this list)
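
The paper describes Staged Visual Future Rollouts only as a test-time optimization strategy; one plausible reading is a best-of-N selection per stage, sketched below with hypothetical `sample_segment` and `score` helpers (in MIND-V, the scorer would plausibly be the PFC reward).

```python
import random

# Hypothetical stand-ins: a conditional sampler and a physics-aware scorer.
def sample_segment(context, seed):
    random.seed(seed)
    return {"frames": 16, "context": context, "quality": random.random()}

def score(segment):
    return segment["quality"]  # e.g. the PFC reward in MIND-V

def staged_rollout(first_frame, num_stages=4, candidates_per_stage=8):
    """Test-time search: at each stage, sample several candidate
    continuations and keep the highest-scoring one, so errors do not
    compound across a long horizon."""
    context, video = first_frame, []
    for stage in range(num_stages):
        best = max(
            (sample_segment(context, seed=stage * 100 + i)
             for i in range(candidates_per_stage)),
            key=score,
        )
        video.append(best)
        context = best  # next stage conditions on the chosen segment
    return video

print(len(staged_rollout(first_frame=None)))
```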
πŸ”Ž Similar Papers
No similar papers found.