🤖 AI Summary
This work addresses task-driven multi-robot exploration in unknown environments, where mobile sensor robots collaboratively assist a primary robot in reaching a target efficiently. To handle communication-constrained scenarios, we propose a task-oriented uncertainty metric as the reward function, marking the first explicit incorporation of map compression distortion into exploration decision-making. We design a scalable map compression mechanism that integrates sparse coding with the information bottleneck principle, and develop a distributed communication–action coordination framework that unifies multi-agent reinforcement learning with distributed consensus optimization. Experiments on realistic map simulations demonstrate significant improvements: target arrival time is substantially reduced, communication overhead decreases by 37%, and performance consistently surpasses baseline methods, including information-gain and random exploration.
📝 Abstract
This paper investigates the task-driven exploration of unknown environments by mobile sensors that communicate compressed measurements. The sensors explore the area and transmit their compressed data to another robot, assisting it in reaching a goal location. We propose a novel communication framework and a tractable multi-agent exploration algorithm to select the sensors' actions. The algorithm uses a task-driven measure of uncertainty, resulting from map compression, as a reward function. We validate the efficacy of our algorithm through numerical simulations conducted on a realistic map and compare it with two alternative approaches. The results indicate that the proposed algorithm effectively decreases the time required for the robot to reach its target without placing excessive load on the communication network.