MORE: Mobile Manipulation Rearrangement Through Grounded Language Reasoning

📅 2025-05-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the low zero-shot planning reliability and severe hallucination issues of foundation models in long-horizon mobile manipulation within large-scale, multi-object scenes, this paper proposes a scene-graph-based active filtering mechanism. The method constrains open-domain planning to bounded subproblems via instance discrimination and task-relevant subgraph extraction, integrating scene-graph representation, grounded language reasoning, instance-level semantic modeling, and active subgraph filtering to enable robust multimodal foundation model inference. Notably, it is the first approach to support cross-domain (indoor/outdoor) rearrangement without task-specific fine-tuning. On the BEHAVIOR-1K benchmark, comprising 81 diverse rearrangement tasks, the method significantly outperforms existing foundation-model-based approaches. Furthermore, extensive real-world evaluations on complex daily tasks demonstrate strong generalization and practical utility.
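The core idea above, restricting the planner to a task-relevant subgraph of the full scene graph, can be illustrated with a minimal sketch. This is not the MORE implementation; the `Node` structure, `filter_subgraph` function, and the toy scene are illustrative assumptions.

```python
# Hypothetical sketch of task-relevant subgraph filtering as described in
# the summary. Names and data layout are illustrative, not from MORE.
from dataclasses import dataclass


@dataclass
class Node:
    node_id: str   # unique instance id, e.g. "apple_2" (instance discrimination)
    category: str  # object/region category, e.g. "apple"
    region: str    # id of the containing region, e.g. "kitchen"


def filter_subgraph(graph: dict, task_categories: set) -> dict:
    """Keep only instances whose category is task-relevant, plus the
    regions containing them, yielding a bounded planning problem."""
    keep = {nid for nid, n in graph.items() if n.category in task_categories}
    # also retain the regions that contain a kept instance
    keep |= {graph[nid].region for nid in keep if graph[nid].region in graph}
    return {nid: graph[nid] for nid in keep}


# Toy scene graph: two apple instances, one unrelated chair, one region.
scene = {
    "kitchen": Node("kitchen", "region", "kitchen"),
    "apple_1": Node("apple_1", "apple", "kitchen"),
    "apple_2": Node("apple_2", "apple", "kitchen"),
    "chair_1": Node("chair_1", "chair", "kitchen"),
}
sub = filter_subgraph(scene, {"apple"})
print(sorted(sub))  # chair_1 is filtered out before the planner sees the scene
```

Pruning irrelevant instances this way shrinks the context handed to the foundation model, which is what bounds the planning problem and reduces opportunities for hallucinated objects.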

📝 Abstract
Autonomous long-horizon mobile manipulation encompasses a multitude of challenges, including scene dynamics, unexplored areas, and error recovery. Recent works have leveraged foundation models for scene-level robotic reasoning and planning. However, the performance of these methods degrades when dealing with a large number of objects and large-scale environments. To address these limitations, we propose MORE, a novel approach for enhancing the capabilities of language models to solve zero-shot mobile manipulation planning for rearrangement tasks. MORE leverages scene graphs to represent environments, incorporates instance differentiation, and introduces an active filtering scheme that extracts task-relevant subgraphs of object and region instances. These steps yield a bounded planning problem, effectively mitigating hallucinations and improving reliability. Additionally, we introduce several enhancements that enable planning across both indoor and outdoor environments. We evaluate MORE on 81 diverse rearrangement tasks from the BEHAVIOR-1K benchmark, where it becomes the first approach to successfully solve a significant share of the benchmark, outperforming recent foundation model-based approaches. Furthermore, we demonstrate the capabilities of our approach in several complex real-world tasks, mimicking everyday activities. We make the code publicly available at https://more-model.cs.uni-freiburg.de.
Problem

Research questions and friction points this paper is trying to address.

Enhances language models for zero-shot mobile manipulation planning
Addresses challenges in large-scale object rearrangement tasks
Improves reliability by mitigating hallucinations in robotic reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses scene graphs for environment representation
Incorporates instance differentiation and filtering
Enables cross-environment indoor-outdoor planning
Mohammad Mohammadi
Department of Computer Science, University of Freiburg, Germany; Department of Computer Science, University of Toronto, Canada
Daniel Honerkamp
PhD Student, University of Freiburg
Reinforcement Learning · Robotics · Artificial Intelligence · Embodied AI · Mobile Manipulation
Martin Büchner
Department of Computer Science, University of Freiburg, Germany
Matteo Cassinelli
Toyota Motor Europe
T. Welschehold
Department of Computer Science, University of Freiburg, Germany
Fabien Despinoy
Toyota Motor Europe
Igor Gilitschenski
Assistant Professor, University of Toronto
Robotics · Machine Learning · Computer Vision
A. Valada
Department of Computer Science, University of Freiburg, Germany