🤖 AI Summary
Image-based policies exhibit limited spatial and temporal reasoning for mobile manipulation. Method: This paper proposes an end-to-end learning framework for mobile manipulation policies grounded in a 3D latent map. It incrementally fuses multi-view observations into a voxelized 3D latent feature map, uses this map as the state representation, and couples it with a pretrained decoder for online latent feature refinement. A 3D feature aggregator is further introduced to provide global map context for policy training via behavior cloning or reinforcement learning. Contribution/Results: To our knowledge, this is the first framework to explicitly embed an updatable global 3D latent representation into the policy network, significantly improving scene understanding beyond the current field of view and long-horizon perceptual integration. Experiments demonstrate a 25% improvement in success rate over image-only policies on scene-level mobile manipulation and sequential tabletop tasks, along with superior out-of-distribution generalization to novel scenes.
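The incremental multi-view fusion described above can be sketched minimally as follows. This is an illustrative simplification, not the paper's implementation: it assumes per-point latent features are fused into a fixed-resolution voxel grid by a running average, and it stands in for the learned 3D feature aggregator with simple mean pooling over observed voxels. All class and method names are hypothetical.

```python
import numpy as np

class LatentVoxelMap:
    """Toy 3D latent feature map with incremental multi-view fusion.

    Hypothetical simplification of SBP's mapping module: each observation
    contributes per-point latent features, fused into a voxel grid by a
    running average. The fusion rule and names are assumptions for
    illustration, not the paper's exact method.
    """

    def __init__(self, grid_size=32, feat_dim=16, extent=2.0):
        self.grid_size = grid_size
        self.extent = extent                       # map covers [-extent, extent]^3
        self.feats = np.zeros((grid_size,) * 3 + (feat_dim,))
        self.counts = np.zeros((grid_size,) * 3)   # observations per voxel

    def _to_voxel(self, points):
        # world coordinates -> integer voxel indices, clipped to the grid
        idx = (points + self.extent) / (2 * self.extent) * self.grid_size
        return np.clip(idx.astype(int), 0, self.grid_size - 1)

    def fuse(self, points, features):
        """Fuse one view's (N, 3) points and (N, D) latent features."""
        for (i, j, k), f in zip(self._to_voxel(points), features):
            c = self.counts[i, j, k]
            # running average keeps the map cheap to update online
            self.feats[i, j, k] = (self.feats[i, j, k] * c + f) / (c + 1)
            self.counts[i, j, k] = c + 1

    def global_context(self):
        """Crude stand-in for a 3D feature aggregator: mean-pool observed voxels."""
        mask = self.counts > 0
        if mask.any():
            return self.feats[mask].mean(axis=0)
        return np.zeros(self.feats.shape[-1])
```

A policy would consume `global_context()` (or a richer learned aggregation) as a state variable alongside the current observation, which is what lets it reason about regions outside the present field of view.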
📝 Abstract
In this paper, we demonstrate that mobile manipulation policies utilizing a 3D latent map achieve stronger spatial and temporal reasoning than policies relying solely on images. We introduce Seeing the Bigger Picture (SBP), an end-to-end policy learning approach that operates directly on a 3D map of latent features. In SBP, the map extends perception beyond the robot's current field of view and aggregates observations over long horizons. Our mapping approach incrementally fuses multiview observations into a grid of scene-specific latent features. A pre-trained, scene-agnostic decoder reconstructs target embeddings from these features and enables online optimization of the map features during task execution. A policy, trainable with behavior cloning or reinforcement learning, treats the latent map as a state variable and uses global context from the map obtained via a 3D feature aggregator. We evaluate SBP on scene-level mobile manipulation and sequential tabletop manipulation tasks. Our experiments demonstrate that SBP (i) reasons globally over the scene, (ii) leverages the map as long-horizon memory, and (iii) outperforms image-based policies in both in-distribution and novel scenes, e.g., improving the success rate by 25% for the sequential manipulation task.
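The abstract's online optimization of map features against a frozen decoder can be sketched as follows. This is a hedged, illustrative example, not SBP's actual procedure: it assumes a linear decoder (a matrix `decoder_W`) reconstructing target embeddings, and refines the map features by gradient descent on a squared reconstruction error while the decoder stays fixed. All names are hypothetical.

```python
import numpy as np

def refine_map_features(feats, decoder_W, target_emb, lr=0.1, steps=50):
    """Online latent refinement sketch (illustrative, not the paper's exact
    method): freeze a pretrained decoder, here the linear map decoder_W of
    shape (D, E), and gradient-descend the (V, D) map features so the decoded
    embeddings match the (V, E) target embeddings observed during execution.
    """
    f = feats.copy()
    for _ in range(steps):
        err = f @ decoder_W - target_emb   # decoder output vs. target embedding
        grad = err @ decoder_W.T           # gradient of 0.5 * ||err||^2 w.r.t. f
        f -= lr * grad                     # only the map features are updated
    return f
```

The key property mirrored here is that the decoder is scene-agnostic and shared, while the per-scene latent features remain a free variable that new observations can continue to improve during task execution.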