OWMM-Agent: Open World Mobile Manipulation With Multi-modal Agentic Data Synthesis

📅 2025-06-04
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Open-world mobile manipulation (OWMM) faces three core challenges: instruction openness, environmental dynamics, and tight coupling between high-level decision-making and low-level control. To address these, we propose the first dedicated multimodal agent architecture for OWMM, integrating joint modeling of multi-view visual frames and egocentric state representations. We introduce OWMM-VLM, a domain-specific foundation model that unifies global scene understanding, real-time state tracking, and multimodal action generation. Key innovations include a function-calling-based control mechanism, an instruction-driven active data synthesis pipeline, and domain-adaptive fine-tuning techniques tailored for vision-language models (VLMs). Extensive real-world experiments demonstrate substantial performance gains over general-purpose models such as GPT-4o, achieving state-of-the-art results with strong zero-shot generalization across diverse tasks and environments.
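The function-calling control mechanism described above can be sketched as a simple agent loop: the agent maintains multi-view scene frames plus an egocentric state, queries a VLM for the next action as a structured function call, and dispatches it to a low-level skill. This is a minimal illustrative sketch, not the paper's actual API; all names (`AgentState`, `navigate_to`, `pick`, `mock_vlm`) are hypothetical, and the VLM is replaced by a stub.

```python
# Hypothetical sketch of a function-calling control loop for an OWMM-style
# agent. The real system would call a fine-tuned VLM; here a stub stands in.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AgentState:
    scene_frames: list = field(default_factory=list)  # multi-view scene frames
    ego_frame: str = ""                               # current egocentric view
    history: list = field(default_factory=list)       # executed function calls

# Low-level skills exposed to the VLM as callable tools (illustrative only).
def navigate_to(state: AgentState, target: str) -> str:
    state.history.append(("navigate_to", target))
    return f"arrived at {target}"

def pick(state: AgentState, obj: str) -> str:
    state.history.append(("pick", obj))
    return f"holding {obj}"

TOOLS: dict[str, Callable] = {"navigate_to": navigate_to, "pick": pick}

def step(state: AgentState, call: dict) -> str:
    """Dispatch one VLM-generated function call to the matching skill."""
    return TOOLS[call["name"]](state, **call["arguments"])

def mock_vlm(instruction: str, state: AgentState) -> dict:
    # Stand-in for the model: emit a structured call based on progress so far.
    if not state.history:
        return {"name": "navigate_to", "arguments": {"target": "kitchen table"}}
    return {"name": "pick", "arguments": {"obj": "mug"}}

state = AgentState(scene_frames=["view_0.png", "view_1.png"])
for _ in range(2):
    step(state, mock_vlm("bring me the mug", state))
print(state.history)  # [('navigate_to', 'kitchen table'), ('pick', 'mug')]
```

Representing actions as named calls with typed arguments keeps the interface between high-level decision-making and low-level control narrow and auditable, which is the point of the function-calling design.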

๐Ÿ“ Abstract
The rapid progress of navigation, manipulation, and vision models has made mobile manipulators capable of many specialized tasks. However, the open-world mobile manipulation (OWMM) task remains a challenge due to the need to generalize to open-ended instructions and environments, as well as the systemic complexity of integrating high-level decision making with low-level robot control based on both global scene understanding and the current agent state. To address this complexity, we propose a novel multi-modal agent architecture that maintains multi-view scene frames and agent states for decision-making and controls the robot by function calling. A second challenge is hallucination caused by domain shift. To enhance agent performance, we further introduce an agentic data synthesis pipeline for the OWMM task that adapts the VLM to our task domain through instruction fine-tuning. We highlight our fine-tuned OWMM-VLM as the first dedicated foundation model for mobile manipulators, unifying global scene understanding, robot state tracking, and multi-modal action generation in a single model. Through experiments, we demonstrate that our model achieves SOTA performance compared to other foundation models, including GPT-4o, and strong zero-shot generalization in the real world. The project page is at https://github.com/HHYHRHY/OWMM-Agent
Problem

Research questions and friction points this paper is trying to address.

Generalizing open-world mobile manipulation to diverse instructions and environments
Integrating high-level decision-making with low-level robot control systematically
Mitigating domain shift hallucination in vision-language models for robotics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-modal agent architecture for decision-making
Agentic data synthesis for domain adaptation
Fine-tuned VLM with unified action generation