IndoorR2X: Indoor Robot-to-Everything Coordination with LLM-Driven Planning

📅 2026-03-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high exploration cost and low collaboration efficiency in indoor multi-robot systems caused by partial observability. To overcome these limitations, the authors propose integrating data from mobile robots and static Internet-of-Things (IoT) sensors to construct a global semantic state representation, which is then leveraged by a large language model (LLM) for high-level task planning, enabling efficient Robot-to-Everything (R2X) collaboration. The study presents the first LLM-driven benchmark and simulation framework for indoor R2X coordination, unifying multi-robot systems, IoT sensing networks, and semantic world modeling. Experimental results demonstrate that the IoT-enhanced semantic world model significantly improves the efficiency and reliability of multi-robot task execution.

📝 Abstract
Although robot-to-robot (R2R) communication improves indoor scene understanding beyond what a single robot can achieve, R2R alone cannot overcome partial observability without substantial exploration overhead or larger team sizes. In contrast, many indoor environments already include low-cost Internet of Things (IoT) sensors (e.g., cameras) that provide persistent, building-wide context beyond onboard perception. We therefore introduce IndoorR2X, the first benchmark and simulation framework for Large Language Model (LLM)-driven multi-robot task planning with Robot-to-Everything (R2X) perception and communication in indoor environments. IndoorR2X integrates observations from mobile robots and static IoT devices to construct a global semantic state that supports scalable scene understanding, reduces redundant exploration, and enables high-level coordination through LLM-based planning. IndoorR2X provides configurable simulation environments, sensor layouts, robot teams, and task suites to systematically evaluate high-level semantic coordination strategies. Extensive experiments across diverse settings demonstrate that IoT-augmented world modeling improves multi-robot efficiency and reliability, and we highlight key insights and failure modes for advancing LLM-based collaboration between robot teams and indoor IoT sensors.
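To make the abstract's pipeline concrete, here is a minimal sketch of how observations from mobile robots and static IoT sensors could be merged into a global semantic state and serialized into a prompt for an LLM planner. All names here (`Observation`, `SemanticWorldState`, `to_prompt`, the room and object labels) are illustrative assumptions, not the paper's actual API or data format.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the R2X idea: robot and IoT observations are
# fused into one semantic world state, then rendered as planner context.

@dataclass
class Observation:
    source: str    # e.g. "robot_1" or "iot_cam_kitchen" (illustrative IDs)
    room: str      # semantic location label
    objects: list  # object labels detected at that location

@dataclass
class SemanticWorldState:
    rooms: dict = field(default_factory=dict)  # room -> set of object labels

    def integrate(self, obs: Observation) -> None:
        # Union new detections into the room's object set; static IoT
        # sensors and mobile robots are treated uniformly.
        self.rooms.setdefault(obs.room, set()).update(obs.objects)

    def to_prompt(self, task: str) -> str:
        # Serialize the fused state into text an LLM planner can condition on.
        lines = [f"Task: {task}", "Known world state:"]
        for room in sorted(self.rooms):
            lines.append(f"- {room}: {', '.join(sorted(self.rooms[room]))}")
        lines.append("Assign one subtask to each robot.")
        return "\n".join(lines)

world = SemanticWorldState()
world.integrate(Observation("iot_cam_kitchen", "kitchen", ["mug", "sink"]))
world.integrate(Observation("robot_1", "hallway", ["door"]))
world.integrate(Observation("robot_2", "kitchen", ["kettle"]))
prompt = world.to_prompt("find and wash the mug")
print(prompt)
```

Because the IoT camera already reports the mug's room, the planner's prompt contains its location before any robot has explored the kitchen, which is the exploration saving the paper attributes to IoT-augmented world modeling.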
Problem

Research questions and friction points this paper is trying to address.

Robot-to-Everything
indoor environments
multi-robot coordination
partial observability
IoT sensors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Robot-to-Everything (R2X)
Large Language Model (LLM)
IoT-augmented perception
multi-robot coordination
semantic scene understanding