🤖 AI Summary
Underwater multi-robot collaborative coverage faces challenges including partial observability, communication constraints, absence of GPS-based localization, and environmental uncertainty. To address these, this paper proposes a semantic-guided fuzzy control framework: (1) a lightweight large language model compresses multimodal sensor observations into interpretable semantic tokens; (2) a semantic communication–based distributed coordination mechanism enables intent sharing and spatial task allocation; and (3) a fuzzy inference system integrates semantic inputs with predefined membership functions to generate robust, adaptive control commands. Evaluated in an unknown coral-reef-like environment, the framework achieves a 23.6% improvement in coverage efficiency and a 41.2% reduction in redundant exploration compared to baseline methods. It significantly enhances goal-directed navigation robustness and collaborative adaptability under map-free, GPS-denied conditions.
📝 Abstract
Underwater multi-robot cooperative coverage remains challenging due to partial observability, limited communication, environmental uncertainty, and the lack of global localization. To address these issues, this paper presents a semantics-guided fuzzy control framework that couples Large Language Models (LLMs) with interpretable control and lightweight coordination. Raw multimodal observations are compressed by the LLM into compact, human-interpretable semantic tokens that summarize obstacles, unexplored regions, and Objects Of Interest (OOIs) under uncertain perception. A fuzzy inference system with predefined membership functions then maps these tokens into smooth, stable steering and gait commands, enabling reliable navigation without global positioning. To coordinate multiple robots, we further introduce semantic communication that shares intent and local context in linguistic form, enabling agreement on who explores where while avoiding redundant revisits. Extensive simulations in unknown reef-like environments show that, under limited sensing and communication, the proposed framework achieves robust OOI-oriented navigation and cooperative coverage with improved efficiency and adaptability, narrowing the gap between semantic cognition and distributed underwater control in GPS-denied, map-free conditions.
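To make the fuzzy-control stage concrete, here is a minimal sketch of a Mamdani-style fuzzy inference step of the kind the abstract describes. It is not the paper's implementation: the input names (normalized left/right obstacle proximity), the triangular membership functions, and the three-rule base are illustrative assumptions standing in for the semantic tokens and predefined membership functions mentioned above.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def steer(obstacle_left, obstacle_right):
    """Fuzzy steering command in [-1 (hard left), +1 (hard right)].

    Inputs are hypothetical semantic cues: normalized obstacle
    proximities on each side, in [0, 1].
    """
    # Fuzzification: degree to which each side is 'blocked' or the path 'clear'.
    blocked_l = tri(obstacle_left, 0.3, 1.0, 1.7)   # high proximity -> blocked
    blocked_r = tri(obstacle_right, 0.3, 1.0, 1.7)
    clear = tri(max(obstacle_left, obstacle_right), -0.7, 0.0, 0.7)

    # Illustrative rule base: blocked-left -> steer right; blocked-right ->
    # steer left; clear -> go straight. Output terms are triangles on [-1, 1].
    rules = [(blocked_l, (0.0, 0.5, 1.0)),    # steer right
             (blocked_r, (-1.0, -0.5, 0.0)),  # steer left
             (clear,     (-0.5, 0.0, 0.5))]   # straight

    # Min-max inference and centroid defuzzification on a discretized universe.
    n, num, den = 101, 0.0, 0.0
    for i in range(n):
        y = -1.0 + 2.0 * i / (n - 1)
        mu = max(min(w, tri(y, a, b, c)) for w, (a, b, c) in rules)
        num += y * mu
        den += mu
    return num / den if den else 0.0
```

Because the output is a weighted centroid rather than a hard rule switch, commands vary smoothly as the fuzzified inputs change, which is the property the abstract relies on for stable steering and gait under uncertain perception.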