🤖 AI Summary
Existing methods for 4D video generation rely on precise camera poses and struggle to maintain geometric consistency across viewpoints, limiting their usefulness for robot dynamic planning and interactive tasks. Method: We propose an approach that jointly models RGB-D temporal dynamics and 4D spatiotemporal structure, using geometry-aware cross-view pointmap alignment supervision, learned without pose inputs, to enforce a shared, coherent 3D scene representation. This enables stable novel-view video prediction and direct recovery of 6DoF end-effector trajectories from the generated videos. Contribution/Results: Evaluated on multiple simulated and real-robot datasets, the method significantly improves visual stability and spatial-geometric alignment accuracy while demonstrating strong cross-view generalization, establishing a differentiable, geometrically faithful foundation for future-state prediction and vision-based closed-loop robotic control.
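The summary does not spell out the alignment objective, so the following is only a rough, hypothetical sketch of what cross-view pointmap alignment supervision could look like: a pairwise Chamfer term that pulls the per-view point clouds (each predicted in a shared scene frame, with no camera poses given to the model) onto the same underlying surface. All names and shapes below are illustrative assumptions, not the paper's actual loss:

```python
import torch

def chamfer(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between two point sets of shape (N, 3) and (M, 3)."""
    d = torch.cdist(a, b)  # (N, M) pairwise Euclidean distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def cross_view_alignment_loss(pointmaps: torch.Tensor, n_samples: int = 2048) -> torch.Tensor:
    """
    pointmaps: (V, H, W, 3) per-view pointmaps predicted in a shared scene frame.
    Penalizes disagreement between the point clouds recovered from different
    views, encouraging a single coherent 3D scene representation.
    """
    V = pointmaps.shape[0]
    clouds = pointmaps.reshape(V, -1, 3)
    # Subsample points so the pairwise distance matrix stays small.
    idx = torch.randint(clouds.shape[1], (n_samples,))
    clouds = clouds[:, idx]
    loss = pointmaps.new_zeros(())
    for i in range(V):
        for j in range(i + 1, V):
            loss = loss + chamfer(clouds[i], clouds[j])
    return loss / (V * (V - 1) / 2)  # average over view pairs
```

A pairwise Chamfer term is just one plausible form; the actual supervision may instead compare predicted pointmaps against ground-truth geometry at corresponding pixels.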
📝 Abstract
Understanding and predicting the dynamics of the physical world can enhance a robot's ability to plan and interact effectively in complex environments. While recent video generation models have shown strong potential in modeling dynamic scenes, generating videos that are both temporally coherent and geometrically consistent across camera views remains a significant challenge. To address this, we propose a 4D video generation model that enforces multi-view 3D consistency of the generated videos by supervising the model with cross-view pointmap alignment during training. This geometric supervision enables the model to learn a shared 3D representation of the scene, allowing it to predict future video sequences from novel viewpoints based solely on the given RGB-D observations, without requiring camera poses as inputs. Compared to existing baselines, our method produces more visually stable and spatially aligned predictions across multiple simulated and real-world robotic datasets. We further show that the predicted 4D videos can be used to recover robot end-effector trajectories using an off-the-shelf 6DoF pose tracker, supporting robust robot manipulation and generalization to novel camera viewpoints.
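As a minimal sketch of the trajectory-recovery step described above: the predicted RGB-D frames are fed through a 6DoF pose tracker frame by frame, yielding an end-effector trajectory. Here `PoseTracker` is a hypothetical stand-in interface, not the actual API of whichever off-the-shelf tracker the paper uses:

```python
import numpy as np

class PoseTracker:
    """Hypothetical stand-in for an off-the-shelf 6DoF pose tracker;
    a real tracker's interface will differ."""
    def track(self, rgb: np.ndarray, depth: np.ndarray) -> np.ndarray:
        """Return a 4x4 SE(3) pose of the end effector for one RGB-D frame."""
        raise NotImplementedError

def recover_trajectory(frames, tracker: PoseTracker) -> list[np.ndarray]:
    """
    frames: iterable of (rgb, depth) pairs taken from the predicted 4D video.
    Returns the sequence of 4x4 end-effector poses implied by the video.
    """
    return [tracker.track(rgb, depth) for rgb, depth in frames]
```

The recovered SE(3) poses can then be handed to a downstream controller, which is what makes the predicted 4D videos usable for closed-loop manipulation.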