🤖 AI Summary
This work addresses the problem of generating collision-free motion policies from a single RGB image. The proposed Video-Generation Environment Representation (VGER) framework bypasses error-prone monocular depth estimation by reconstructing a dense point cloud via multi-view video generation: a large-scale video diffusion model, conditioned on the input image, produces a moving-camera video whose frames are passed to a pre-trained 3D foundation model. A multi-scale noise scheme is then used to train an implicit neural representation of the scene geometry, together with a motion generation model that complies with it. Given only one input RGB image, VGER generates geometrically consistent, smooth, continuous, and collision-free trajectories across diverse indoor and outdoor scenes. The key contribution is the integration of video generation into environment geometry modeling for motion planning, eliminating explicit depth prediction and improving trajectory quality and cross-scene generalization.
📝 Abstract
Autonomous robots typically need to construct representations of their surroundings and adapt their motions to the geometry of their environment. Here, we tackle the problem of constructing a policy model for collision-free motion generation, consistent with the environment, from a single input RGB image. Extracting 3D structure from a single image often relies on monocular depth estimation, where recent developments have given rise to large pre-trained models such as DepthAnything. However, using the outputs of these models for downstream motion generation is challenging due to the frustum-shaped errors they introduce. Instead, we propose a framework known as Video-Generation Environment Representation (VGER), which leverages advances in large-scale video generation models to generate a moving-camera video conditioned on the input image. The frames of this video, which form a multi-view dataset, are then input into a pre-trained 3D foundation model to produce a dense point cloud. We then introduce a multi-scale noise approach to train an implicit representation of the environment's structure, and build a motion generation model that complies with the geometry of this representation. We extensively evaluate VGER over a diverse set of indoor and outdoor environments, demonstrating its ability to produce smooth motions that account for the captured geometry of a scene, all from a single RGB input image.
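The multi-scale noise idea can be illustrated with a minimal sketch: perturbing reconstructed surface points at several noise scales yields off-surface samples whose displacement norms can supervise an unsigned-distance-style implicit field, with coarse scales covering free space and fine scales sharpening the surface. This is an assumption-laden illustration, not the paper's exact scheme, and all function and parameter names here are hypothetical.

```python
import numpy as np

def multiscale_noise_samples(points, scales=(0.01, 0.05, 0.2),
                             samples_per_scale=4, seed=0):
    """Perturb surface points at several noise scales.

    points: (N, 3) array of reconstructed point-cloud positions.
    Returns (M, 3) perturbed query points and (M,) displacement norms,
    usable as regression targets when fitting an implicit distance field.
    """
    rng = np.random.default_rng(seed)
    queries, targets = [], []
    for sigma in scales:  # one pass per noise scale, coarse to fine
        for _ in range(samples_per_scale):
            noise = rng.normal(scale=sigma, size=points.shape)
            queries.append(points + noise)
            # Norm of the injected displacement approximates the
            # distance of the query point from the surface.
            targets.append(np.linalg.norm(noise, axis=1))
    return np.concatenate(queries), np.concatenate(targets)

# Toy usage: 100 points sampled on a unit sphere as a stand-in
# for the dense point cloud produced by the 3D foundation model.
rng = np.random.default_rng(0)
pts = rng.normal(size=(100, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
X, d = multiscale_noise_samples(pts)
```

The pairs `(X, d)` could then serve as inputs and targets when training an implicit neural representation, with the trained field queried by the motion generator to keep trajectories clear of the surface.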