ORV: 4D Occupancy-centric Robot Video Generation

📅 2025-06-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address coarse-grained action control, poor generalization, and multi-view inconsistency in robot simulation video generation, this paper proposes a generative framework built on 4D semantic occupancy sequences. Methodologically, it introduces the first diffusion-based video synthesis paradigm driven by 4D occupancy representations, integrating a temporally consistent diffusion architecture, multi-view geometric consistency losses, and embedded robot motion priors to enable fine-grained spatiotemporal controllability and synchronized multi-view output. Compared with conventional action-sequence modeling approaches, the framework significantly improves geometric-semantic guidance accuracy and cross-task generalization. Extensive evaluations on multiple robotic manipulation datasets show that the generated videos achieve superior fidelity, temporal coherence, and action-alignment accuracy over state-of-the-art methods. To foster reproducibility and community advancement, the authors publicly release code, pre-trained models, and interactive demos.
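To make the occupancy-conditioning idea concrete: a 4D semantic occupancy sequence is a time series of voxel grids carrying per-voxel class labels, and such a sequence can be encoded into per-frame feature maps for a video diffusion model to condition on at each denoising step. The following is a minimal, hypothetical PyTorch sketch of one plausible encoder for this representation; the class name OccupancyEncoder, its layer choices, and all shapes are illustrative assumptions, not ORV's published architecture.

```python
import torch
import torch.nn as nn

class OccupancyEncoder(nn.Module):
    """Hypothetical sketch: encode a 4D semantic occupancy sequence
    (T frames of an X*Y*Z label grid) into per-frame feature maps
    that a video diffusion model could condition on."""

    def __init__(self, num_classes=16, embed_dim=32, out_dim=128):
        super().__init__()
        self.class_embed = nn.Embedding(num_classes, embed_dim)
        # 3D convolutions over each frame's voxel grid; the height
        # axis is later pooled away to give a BEV-style 2D feature map.
        self.conv = nn.Sequential(
            nn.Conv3d(embed_dim, 64, kernel_size=3, stride=2, padding=1),
            nn.SiLU(),
            nn.Conv3d(64, out_dim, kernel_size=3, stride=2, padding=1),
            nn.SiLU(),
        )

    def forward(self, occ):                  # occ: (B, T, X, Y, Z) int64 labels
        B, T, X, Y, Z = occ.shape
        e = self.class_embed(occ)            # (B, T, X, Y, Z, C)
        e = e.permute(0, 1, 5, 2, 3, 4)      # (B, T, C, X, Y, Z)
        e = e.flatten(0, 1)                  # fold time into batch: (B*T, C, X, Y, Z)
        f = self.conv(e)                     # (B*T, out_dim, X', Y', Z')
        f = f.mean(dim=-1)                   # pool the height axis: (B*T, out_dim, X', Y')
        return f.unflatten(0, (B, T))        # (B, T, out_dim, X', Y')

# Toy usage: one sequence of 8 frames on a 32^3 grid with 16 classes.
occ = torch.randint(0, 16, (1, 8, 32, 32, 32))
cond = OccupancyEncoder()(occ)
print(cond.shape)  # torch.Size([1, 8, 128, 8, 8])
```

In a real pipeline these features would be injected into the denoiser, for example by concatenation with the latent frames or via cross-attention; ORV's specific injection mechanism is not detailed on this page.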

📝 Abstract
Acquiring real-world robotic simulation data through teleoperation is notoriously time-consuming and labor-intensive. Recently, action-driven generative models have gained widespread adoption in robot learning and simulation, as they eliminate safety concerns and reduce maintenance efforts. However, the action sequences used in these methods often result in limited control precision and poor generalization due to their globally coarse alignment. To address these limitations, we propose ORV, an Occupancy-centric Robot Video generation framework, which utilizes 4D semantic occupancy sequences as a fine-grained representation to provide more accurate semantic and geometric guidance for video generation. By leveraging occupancy-based representations, ORV enables seamless translation of simulation data into photorealistic robot videos, while ensuring high temporal consistency and precise controllability. Furthermore, our framework supports the simultaneous generation of multi-view videos of robot gripping operations - an important capability for downstream robotic learning tasks. Extensive experimental results demonstrate that ORV consistently outperforms existing baseline methods across various datasets and sub-tasks. Demo, Code and Model: https://orangesodahub.github.io/ORV
Problem

Research questions and friction points this paper is trying to address.

Teleoperation data collection is time-consuming and labor-intensive
Action-driven models lack precision and generalization in robot simulation
Can 4D semantic occupancy supply the fine-grained geometric-semantic guidance that coarse action sequences lack?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces 4D semantic occupancy sequences as the conditioning signal for diffusion-based video generation
Occupancy provides fine-grained, spatially aligned semantic and geometric guidance
Generates synchronized multi-view videos of robot gripping operations (see the projection sketch below)
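The multi-view capability follows from occupancy living in world coordinates: a single occupancy frame can be projected into any calibrated camera, yielding per-view guidance maps that are geometrically consistent across views by construction. Below is a minimal, hypothetical NumPy sketch of that projection with a pinhole camera and a naive z-buffer; the function name render_semantic_view and its signature are illustrative assumptions, not ORV's actual interface.

```python
import numpy as np

def render_semantic_view(occ, voxel_size, origin, K, T_cam, hw=(128, 128)):
    """Hypothetical sketch: project occupied voxel centers of one
    occupancy frame into a pinhole camera to get a per-view semantic
    guidance map (nearest voxel wins via a naive z-buffer).

    occ:    (X, Y, Z) int array, 0 = empty, >0 = semantic class
    K:      (3, 3) camera intrinsics; T_cam: (4, 4) world->camera extrinsics
    origin: (3,) world position of the grid corner; voxel_size: scalar
    """
    H, W = hw
    sem = np.zeros((H, W), dtype=np.int64)
    depth = np.full((H, W), np.inf)

    idx = np.argwhere(occ > 0)                          # occupied voxel indices
    centers = origin + (idx + 0.5) * voxel_size         # voxel centers in world frame
    homo = np.concatenate([centers, np.ones((len(idx), 1))], axis=1)
    cam = (T_cam @ homo.T).T[:, :3]                     # points in camera frame
    valid = cam[:, 2] > 1e-6                            # keep points in front of camera
    cam, idx = cam[valid], idx[valid]

    pix = (K @ cam.T).T                                 # perspective projection
    u = (pix[:, 0] / pix[:, 2]).astype(int)
    v = (pix[:, 1] / pix[:, 2]).astype(int)
    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    for ui, vi, zi, (x, y, z) in zip(u[inside], v[inside],
                                     cam[inside, 2], idx[inside]):
        if zi < depth[vi, ui]:                          # z-buffer: keep nearest voxel
            depth[vi, ui] = zi
            sem[vi, ui] = occ[x, y, z]
    return sem
```

Running this for each camera of a rig on the same occupancy frame yields a synchronized set of semantic maps, which is the kind of per-view conditioning a multi-view generator can consume.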
👥 Authors

Xiuyu Yang
Beijing Academy of Artificial Intelligence

Bohan Li
Shanghai Jiao Tong University

Shaocong Xu
Xiamen University
open-set perception, vision-language perception, diffusion-based perception, machine learning

Nan Wang
Beijing Academy of Artificial Intelligence

Chongjie Ye
The Chinese University of Hong Kong, Shenzhen
Computer Vision

Zhaoxi Chen
Ph.D. Student, Nanyang Technological University
Neural rendering, Generative models

Minghan Qin
Bytedance Research | Tsinghua University
Computer Vision, 3D Vision, 3D Scene Perception

Yikang Ding
Tsinghua University
3D Vision, Generative Model

Xin Jin
Eastern Institute of Technology, Ningbo

Hang Zhao
IIIS, Tsinghua University

Hao Zhao
Beijing Academy of Artificial Intelligence; AIR, Tsinghua University