🤖 AI Summary
Mobile service robots face core challenges in deploying embodied intelligence, including multimodal sensor fusion, decision-making under uncertainty, task generalization, and human–robot interaction (HRI). This paper presents the first systematic review of recent advances in leveraging foundation models, specifically large language models (LLMs), vision-language models (VLMs), multimodal large language models (MLLMs), and vision-language-action models (VLAMs), for embodied intelligence in mobile service robotics, with emphasis on critical bottlenecks in the perception–reasoning–action loop. The review examines how such models enable real-time sensor fusion, language-conditioned control, and adaptive task execution, and surveys their application across home assistance, healthcare support, and service automation scenarios. It concludes with three research directions, namely predictive scaling laws, cross-embodiment generalization, and long-term autonomous adaptation, aimed at scalable, efficient, and robust deployment of foundation models in human-centric robotic systems.
📝 Abstract
Rapid advancements in foundation models, including Large Language Models, Vision-Language Models, Multimodal Large Language Models, and Vision-Language-Action Models, have opened new avenues for embodied AI in mobile service robotics. By combining foundation models with the principles of embodied AI, in which intelligent systems perceive, reason, and act through physical interaction, robots can better understand, adapt to, and execute complex tasks in dynamic real-world environments. However, embodied AI in mobile service robots continues to face key challenges, including multimodal sensor fusion, real-time decision-making under uncertainty, task generalization, and effective human-robot interaction (HRI). In this paper, we present the first systematic review of the integration of foundation models in mobile service robotics, identifying key open challenges in embodied AI and examining how foundation models can address them. In particular, we explore the role of such models in enabling real-time sensor fusion, language-conditioned control, and adaptive task execution. Furthermore, we discuss real-world applications in the domestic assistance, healthcare, and service automation sectors, demonstrating the transformative impact of foundation models on service robotics. We also outline potential future research directions, emphasizing the need for predictive scaling laws, autonomous long-term adaptation, and cross-embodiment generalization to enable scalable, efficient, and robust deployment of foundation models in human-centric robotic systems.