🤖 AI Summary
This work addresses the challenge that existing mobile construction robots struggle to handle unexpected, non-predefined ad-hoc tasks on site, whose locations, timing, and contextual cues are unknown in advance. To this end, the authors propose an AI agent built on a collaborative architecture of multiple large multimodal models (LMMs), deployed for the first time on a quadrupedal robotic platform, which integrates parallel LMM modules dedicated to natural-language task parsing, construction-drawing-based navigation, and visual localization. This integration enables real-time identification of ad-hoc task locations and autonomous response. Across three real-world scenarios, the system achieved a 92.2% success rate in identifying and positioning at task locations, significantly enhancing the autonomy and adaptability of construction robots in executing unplanned tasks.
📝 Abstract
Due to the ever-changing nature of construction, many tasks on sites arise in an improvisational manner. Existing studies of mobile construction robots remain limited in addressing improvisational tasks, where the task-required locations, the timing of task occurrence, and the contextual information needed for execution are not known in advance. We propose an agent that understands improvisational tasks given in natural language, identifies the task-required location, and positions itself there. The agent's functionality was decomposed into three Large Multimodal Model (LMM) modules operating in parallel, applying LMMs to task interpretation and breakdown, construction drawing-based navigation, and visual reasoning for identifying non-predefined task-required locations. The agent was implemented on a quadruped robot and achieved a 92.2% success rate in identifying and positioning at task-required locations across three tests designed to assess improvisational task handling. This study enables mobile construction robots to perform non-predefined tasks autonomously.
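To make the described decomposition concrete, below is a minimal sketch of how three LMM modules could run in parallel and have their outputs combined. The module roles follow the abstract, but `query_lmm`, the prompts, the data types, and the fusion step are illustrative assumptions, not the authors' implementation.

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass


def query_lmm(role_prompt: str, payload: str) -> str:
    # Hypothetical stand-in for a call to a large multimodal model;
    # a real system would send the role prompt plus text/image/drawing
    # inputs to an LMM API and return its response.
    return f"[{role_prompt}] response for: {payload}"


@dataclass
class AgentOutput:
    subtasks: str     # parsed task breakdown from the instruction
    route: str        # waypoint plan derived from the construction drawing
    target_pose: str  # visually grounded task-required location


def handle_improvisational_task(
    instruction: str, drawing: str, camera_view: str
) -> AgentOutput:
    """Run the three LMM modules in parallel and fuse their outputs."""
    with ThreadPoolExecutor(max_workers=3) as pool:
        parse = pool.submit(query_lmm, "task-interpretation", instruction)
        nav = pool.submit(query_lmm, "drawing-based-navigation", drawing)
        vision = pool.submit(query_lmm, "visual-localization", camera_view)
        return AgentOutput(parse.result(), nav.result(), vision.result())


if __name__ == "__main__":
    out = handle_improvisational_task(
        "Patch the crack near the east stairwell",  # ad-hoc natural-language task
        "floor-2 plan",                             # construction drawing reference
        "onboard RGB frame",                        # robot camera input
    )
    print(out)
```

Running the modules concurrently rather than as a single monolithic prompt mirrors the paper's parallel decomposition: each LMM gets a narrow role, and the robot acts only once all three outputs are available.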