Mobi-$\pi$: Mobilizing Your Robot Learning Policy

📅 2025-05-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Visuomotor policies trained from a limited set of robot base poses and camera viewpoints often fail to generalize to novel robot positions. To address this, the paper formulates the "policy mobilization" problem: automatically find a mobile robot base pose in a novel environment that is in distribution for a pre-trained manipulation policy, thereby decoupling navigation from manipulation without retraining or additional demonstrations. Method: the proposed approach uses 3D Gaussian Splatting to synthesize novel views from candidate base poses, a score function that evaluates how suitable each candidate pose is for the learned policy, and sampling-based optimization to select the best pose. Results & Contribution: evaluated on RoboCasa simulation tasks and a real-robot platform, the approach improves success rates on precise manipulation tasks, such as button pressing and faucet turning, and outperforms baselines. Core contributions: (1) formalizing the policy mobilization problem; (2) the Mobi-$\pi$ framework, comprising difficulty metrics, a suite of simulated mobile manipulation tasks, visualization tools, and baseline methods; and (3) a novel pose-optimization approach that bridges navigation and manipulation.

📝 Abstract
Learned visuomotor policies are capable of performing increasingly complex manipulation tasks. However, most of these policies are trained on data collected from limited robot positions and camera viewpoints. This leads to poor generalization to novel robot positions, which limits the use of these policies on mobile platforms, especially for precise tasks like pressing buttons or turning faucets. In this work, we formulate the policy mobilization problem: find a mobile robot base pose in a novel environment that is in distribution with respect to a manipulation policy trained on a limited set of camera viewpoints. Compared to retraining the policy itself to be more robust to unseen robot base pose initializations, policy mobilization decouples navigation from manipulation and thus does not require additional demonstrations. Crucially, this problem formulation complements existing efforts to improve manipulation policy robustness to novel viewpoints and remains compatible with them. To study policy mobilization, we introduce the Mobi-$\pi$ framework, which includes: (1) metrics that quantify the difficulty of mobilizing a given policy, (2) a suite of simulated mobile manipulation tasks based on RoboCasa to evaluate policy mobilization, (3) visualization tools for analysis, and (4) several baseline methods. We also propose a novel approach that bridges navigation and manipulation by optimizing the robot's base pose to align with an in-distribution base pose for a learned policy. Our approach utilizes 3D Gaussian Splatting for novel view synthesis, a score function to evaluate pose suitability, and sampling-based optimization to identify optimal robot poses. We show that our approach outperforms baselines in both simulation and real-world environments, demonstrating its effectiveness for policy mobilization.
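
The abstract's core idea, scoring candidate base poses and picking the one most in distribution for the policy, can be sketched minimally. This is an illustration only: `pose_score` below is a hypothetical toy stand-in for the paper's score function, which is actually computed on views rendered with 3D Gaussian Splatting.

```python
import numpy as np

def pose_score(pose, target=np.array([1.0, 0.5, 0.0])):
    # Toy stand-in score: in the paper, this would measure how
    # in-distribution the view rendered from base pose (x, y, yaw)
    # looks to the trained manipulation policy.
    return -np.sum((pose - target) ** 2)

def select_base_pose(n_samples=512, seed=0):
    # Sample candidate base poses uniformly over a workspace region,
    # score each candidate, and return the best-scoring pose.
    rng = np.random.default_rng(seed)
    low = np.array([0.0, 0.0, -np.pi])   # workspace bounds (assumed)
    high = np.array([2.0, 1.0, np.pi])
    candidates = rng.uniform(low, high, size=(n_samples, 3))
    scores = np.array([pose_score(p) for p in candidates])
    return candidates[np.argmax(scores)]
```

The navigation stack would then drive the base to the selected pose before handing control to the frozen manipulation policy.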

Problem

Research questions and friction points this paper is trying to address.

Improving robot policy generalization to novel positions
Decoupling navigation from manipulation without retraining
Optimizing mobile robot base poses for learned policies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimizes robot base pose for policy alignment
Uses 3D Gaussian Splatting for view synthesis
Sampling-based optimization for pose selection
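
The last point, sampling-based optimization over base poses, could be realized with an iterative scheme such as the cross-entropy method: resample poses from a Gaussian refitted to the top-scoring candidates each round. The sketch below is a generic CEM loop under that assumption, with a toy quadratic `toy_score` standing in for the paper's policy-compatibility score on rendered views.

```python
import numpy as np

def toy_score(poses, target=np.array([1.0, 0.5, 0.0])):
    # Hypothetical stand-in for the policy-compatibility score.
    return -np.sum((poses - target) ** 2, axis=-1)

def cem_optimize(n_iters=10, n_samples=64, n_elite=8, seed=0):
    rng = np.random.default_rng(seed)
    mean = np.zeros(3)                  # (x, y, yaw) initial guess
    std = np.array([1.0, 1.0, np.pi])   # broad initial search
    for _ in range(n_iters):
        # Sample candidates, keep the n_elite best, refit the Gaussian.
        samples = rng.normal(mean, std, size=(n_samples, 3))
        elite = samples[np.argsort(toy_score(samples))[-n_elite:]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean
```

Gradient-free schemes like this are convenient here because the score only needs to be evaluable, not differentiable, on each synthesized view.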