Seeing the Bigger Picture: 3D Latent Mapping for Mobile Manipulation Policy Learning

📅 2025-10-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Image-based input strategies exhibit limited spatial and temporal reasoning in mobile manipulation. Method: This paper proposes an end-to-end learning framework for mobile manipulation policies grounded in a 3D latent map. It incrementally fuses multi-view observations into a voxelized 3D latent feature map that serves as the state representation, and couples it with a pretrained, scene-agnostic decoder for online refinement of the latent features during task execution. A 3D feature aggregator extracts global context from the map to support policy training via behavior cloning or reinforcement learning. Contribution/Results: To the authors' knowledge, this is the first framework to embed an updateable global 3D latent representation directly in the policy network, improving scene understanding beyond the robot's current field of view and long-horizon perceptual integration. Experiments show that the method outperforms image-only policies on scene-level mobile manipulation and sequential tabletop tasks, improving the success rate by 25% on the sequential manipulation task, along with stronger out-of-distribution generalization to novel environments.

📝 Abstract
In this paper, we demonstrate that mobile manipulation policies utilizing a 3D latent map achieve stronger spatial and temporal reasoning than policies relying solely on images. We introduce Seeing the Bigger Picture (SBP), an end-to-end policy learning approach that operates directly on a 3D map of latent features. In SBP, the map extends perception beyond the robot's current field of view and aggregates observations over long horizons. Our mapping approach incrementally fuses multiview observations into a grid of scene-specific latent features. A pre-trained, scene-agnostic decoder reconstructs target embeddings from these features and enables online optimization of the map features during task execution. A policy, trainable with behavior cloning or reinforcement learning, treats the latent map as a state variable and uses global context from the map obtained via a 3D feature aggregator. We evaluate SBP on scene-level mobile manipulation and sequential tabletop manipulation tasks. Our experiments demonstrate that SBP (i) reasons globally over the scene, (ii) leverages the map as long-horizon memory, and (iii) outperforms image-based policies in both in-distribution and novel scenes, e.g., improving the success rate by 25% for the sequential manipulation task.
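The abstract describes incrementally fusing multi-view observations into a grid of scene-specific latent features. A minimal sketch of such incremental fusion, assuming a simple running-average update and an axis-aligned voxel grid (the grid size, feature dimension, and fusion rule are illustrative assumptions, not the paper's actual design):

```python
import numpy as np

class LatentVoxelMap:
    """Hypothetical sketch: fuse per-point latent features from successive
    views into a voxel grid with a running average. All parameters here
    are illustrative, not taken from the paper."""

    def __init__(self, grid_shape=(32, 32, 16), feat_dim=64, voxel_size=0.1):
        self.feats = np.zeros(grid_shape + (feat_dim,), dtype=np.float32)
        self.counts = np.zeros(grid_shape, dtype=np.int64)
        self.voxel_size = voxel_size
        self.grid_shape = np.array(grid_shape)

    def world_to_voxel(self, points):
        # Map world coordinates (N, 3) to integer voxel indices,
        # discarding points that fall outside the grid.
        idx = np.floor(points / self.voxel_size).astype(np.int64)
        valid = np.all((idx >= 0) & (idx < self.grid_shape), axis=1)
        return idx[valid], valid

    def fuse(self, points, features):
        # Incrementally average new per-point features into their voxels.
        idx, valid = self.world_to_voxel(points)
        for (i, j, k), f in zip(idx, features[valid]):
            n = self.counts[i, j, k]
            self.feats[i, j, k] = (self.feats[i, j, k] * n + f) / (n + 1)
            self.counts[i, j, k] = n + 1
```

Each new observation only touches the voxels it sees, so the map accumulates a persistent scene representation across views, which is what lets the policy reason beyond the current field of view.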
Problem

Research questions and friction points this paper is trying to address.

Learning mobile manipulation policies using 3D latent maps
Enhancing spatial-temporal reasoning beyond current field of view
Aggregating long-horizon observations for global scene understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses 3D latent map for spatial-temporal reasoning
Incrementally fuses multiview observations into grid
Enables online optimization during task execution
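The paper's policy consumes global context from the map via a 3D feature aggregator. One plausible sketch is to pool the latent grid over observed voxels into a fixed-size context vector for the policy input; the masked mean-plus-max pooling here is an illustrative assumption, not the paper's actual aggregator:

```python
import numpy as np

def aggregate_map(feats, counts):
    """Hypothetical 3D feature aggregator sketch.
    feats: (X, Y, Z, D) latent grid; counts: (X, Y, Z) observation counts.
    Returns a (2*D,) global context vector over observed voxels."""
    occupied = counts > 0
    flat = feats[occupied]                 # (M, D) features of observed voxels
    if flat.shape[0] == 0:
        return np.zeros(2 * feats.shape[-1], dtype=feats.dtype)
    mean_pool = flat.mean(axis=0)          # average scene context
    max_pool = flat.max(axis=0)            # salient-feature context
    return np.concatenate([mean_pool, max_pool])

def policy_input(feats, counts, robot_state):
    # Concatenate global map context with proprioceptive state
    # to form the policy's state vector.
    return np.concatenate([aggregate_map(feats, counts), robot_state])
```

Because the pooled vector has a fixed size regardless of how much of the scene has been observed, the same policy network can be trained with behavior cloning or reinforcement learning as the map grows.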
Sunghwan Kim
Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA 92093, USA
Woojeh Chung
University of California, San Diego
Robotics, Computer Vision
Zhirui Dai
UC San Diego
Robotics
Dwait Bhatt
Robotics Graduate Student, UC San Diego
Reinforcement Learning, Robotics, Machine Learning, On-Device AI
Arth Shukla
CS PhD Student, University of California - San Diego
Robot learning, simulation, manipulation, vision
Hao Su
Department of Computer Science and Engineering, University of California San Diego, La Jolla, CA 92093, USA
Yulun Tian
Assistant Professor, University of Michigan
Robotics, SLAM, Optimization
Nikolay Atanasov
Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA 92093, USA