EnerVerse-AC: Envisioning Embodied Environments with Action Condition

📅 2025-05-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the high testing cost and reliance on physical interaction or complex simulation in robot imitation learning for dynamic interactive scenarios, this paper proposes EnerVerse-AC, an action-conditioned world model. Methodologically, it introduces a novel multi-level action-conditioning mechanism and ray-map encoding to synthesize high-fidelity, multi-view dynamic visual observations conditioned on input actions—enabling policy evaluation and data augmentation without physical robots or heavy simulation. Furthermore, a failure-trajectory-augmented data strategy enhances out-of-distribution generalization and action controllability. Empirically, on multiple robotic manipulation tasks, EnerVerse-AC achieves video prediction quality comparable to real-world observations, improves policy evaluation accuracy by 23%, and reduces both training and testing costs by over 70%.

Technology Category

Application Category

📝 Abstract
Robotic imitation learning has advanced from solving static tasks to addressing dynamic interaction scenarios, but testing and evaluation remain costly and challenging due to the need for real-time interaction with dynamic environments. We propose EnerVerse-AC (EVAC), an action-conditional world model that generates future visual observations based on an agent's predicted actions, enabling realistic and controllable robotic inference. Building on prior architectures, EVAC introduces a multi-level action-conditioning mechanism and ray map encoding for dynamic multi-view image generation while expanding training data with diverse failure trajectories to improve generalization. As both a data engine and evaluator, EVAC augments human-collected trajectories into diverse datasets and generates realistic, action-conditioned video observations for policy testing, eliminating the need for physical robots or complex simulations. This approach significantly reduces costs while maintaining high fidelity in robotic manipulation evaluation. Extensive experiments validate the effectiveness of our method. Code, checkpoints, and datasets can be found at.
Problem

Research questions and friction points this paper is trying to address.

Reducing costs in robotic imitation learning evaluation
Generating realistic action-conditioned visual observations
Improving generalization with diverse failure trajectories
Innovation

Methods, ideas, or system contributions that make the work stand out.

Action-conditional world model for visual predictions
Multi-level action-conditioning and ray map encoding
Augments data with failure trajectories for generalization
🔎 Similar Papers
No similar papers found.
Y
Yuxin Jiang
AgiBot
Shengcong Chen
Shengcong Chen
Unknown affiliation
World ModelComputer VisionEmbodied AIMedical Image Analysis
L
Liliang Chen
AgiBot
P
Pengfei Zhou
AgiBot
Yue Liao
Yue Liao
National University of Singapore
Computer VisionDeep LearningMLLM
X
Xindong He
AgiBot
C
Chiming Liu
AgiBot
H
Hongsheng Li
MMLab-CUHK
Maoqing Yao
Maoqing Yao
Google
G
Guanghui Ren
AgiBot