Embodied AI with Foundation Models for Mobile Service Robots: A Systematic Review

📅 2025-05-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Mobile service robots face core challenges in deploying embodied intelligence, including multimodal sensor fusion, decision-making under uncertainty, task generalization, and human–robot interaction (HRI). This paper systematically reviews recent advances in applying foundation models, specifically large language models (LLMs), vision-language models (VLMs), multimodal large language models (MLLMs), and vision-language-action models (VLAMs), to embodied intelligence, with emphasis on critical bottlenecks in the perception–reasoning–action loop. The authors propose the first integrated foundation model framework tailored for mobile service robots and introduce three novel research directions: predictive scaling laws, cross-embodiment generalization, and long-term autonomous adaptation. By unifying vision–language–action modeling, real-time sensor fusion, and human-centered HRI techniques, they argue for the framework's robustness, scalability, and practical deployability across home assistance, healthcare support, and service automation scenarios.

📝 Abstract
Rapid advancements in foundation models, including Large Language Models, Vision-Language Models, Multimodal Large Language Models, and Vision-Language-Action Models, have opened new avenues for embodied AI in mobile service robotics. By combining foundation models with the principles of embodied AI, in which intelligent systems perceive, reason, and act through physical interaction, robots can better understand, adapt to, and execute complex tasks in dynamic real-world environments. However, embodied AI in mobile service robots continues to face key challenges, including multimodal sensor fusion, real-time decision-making under uncertainty, task generalization, and effective human-robot interaction (HRI). In this paper, we present the first systematic review of the integration of foundation models in mobile service robotics, identifying key open challenges in embodied AI and examining how foundation models can address them. In particular, we explore the role of such models in enabling real-time sensor fusion, language-conditioned control, and adaptive task execution. Furthermore, we discuss real-world applications in the domestic assistance, healthcare, and service automation sectors, demonstrating the transformative impact of foundation models on service robotics. We also outline future research directions, emphasizing the need for predictive scaling laws, autonomous long-term adaptation, and cross-embodiment generalization to enable the scalable, efficient, and robust deployment of foundation models in human-centric robotic systems.
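The perceive–reason–act principle the abstract describes can be sketched as a minimal control loop. The function and callback names below are illustrative placeholders, not APIs from any surveyed system; the foundation model would sit behind the `reason` callback.

```python
from typing import Any, Callable

def embodied_loop(sense: Callable[[], Any],
                  reason: Callable[[Any], str],
                  act: Callable[[str], None],
                  steps: int = 3) -> list[str]:
    """Run a fixed number of perceive -> reason -> act cycles.

    Returns the chosen actions so the loop's behavior can be inspected.
    """
    history = []
    for _ in range(steps):
        observation = sense()          # multimodal perception (camera, lidar, ...)
        action = reason(observation)   # reasoning module picks the next action
        act(action)                    # low-level controller executes it
        history.append(action)
    return history

# Toy instantiation: "sense" an obstacle flag, turn to avoid it or move forward.
readings = iter([True, False, True])
actions = embodied_loop(
    sense=lambda: next(readings),
    reason=lambda obstacle: "turn" if obstacle else "forward",
    act=lambda a: None,
)
print(actions)  # prints ['turn', 'forward', 'turn']
```

In a real system, `sense` would fuse multiple sensor streams and `reason` would query an LLM/VLM; the loop structure itself stays the same.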
Problem

Research questions and friction points this paper is trying to address.

Challenges in multimodal sensor fusion for mobile robots
Real-time decision-making under uncertainty in dynamic environments
Enhancing human-robot interactions through foundation models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrating foundation models with embodied AI
Real-time sensor fusion and decision-making
Language-conditioned control for task execution
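The language-conditioned control idea listed above can be illustrated with a toy grounding step. In the systems the paper surveys, an LLM/VLM maps a free-form instruction to a robot action; here a simple keyword lookup stands in for the model, and the `Action` schema and skill names are hypothetical, not drawn from the paper.

```python
from dataclasses import dataclass

@dataclass
class Action:
    skill: str    # entry in the robot's skill library
    target: str   # object or location the skill operates on

# Toy skill library standing in for a mobile service robot's capabilities.
SKILLS = {
    "bring": "navigate_and_fetch",
    "fetch": "navigate_and_fetch",
    "clean": "wipe_surface",
    "go": "navigate_to",
}

def language_conditioned_policy(instruction: str) -> Action:
    """Ground a natural-language instruction into a discrete action.

    A keyword match plays the role of the foundation model so that the
    shape of the grounding step is visible without model dependencies.
    """
    words = instruction.lower().split()
    skill = next((SKILLS[w] for w in words if w in SKILLS),
                 "ask_for_clarification")
    target = words[-1] if words else ""
    return Action(skill=skill, target=target)

print(language_conditioned_policy("please fetch the mug"))
# prints Action(skill='navigate_and_fetch', target='mug')
```

Swapping the keyword lookup for a model call (and the last-word heuristic for proper visual grounding) is exactly where the reviewed VLM/VLAM approaches differ from one another.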