Is FISHER All You Need in The Multi-AUV Underwater Target Tracking Task?

📅 2024-12-05
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
In multi-AUV cooperative underwater target tracking, reinforcement learning (RL) requires excessive environmental interaction and difficult reward engineering, while conventional controllers generalize poorly. To address these issues, this paper proposes FISHER, a two-stage imitation learning framework. Its key contributions are: (1) a multi-agent adversarial imitation learning objective derived from the Nash equilibrium condition; (2) a multi-agent independent generalized decision transformer that matches the future states of high-quality samples instead of relying on a reward function; and (3) a sim-to-sim demonstration generation pipeline, built on classical controllers, coupled with a multi-agent discriminator-actor-critic architecture. Extensive simulations across diverse scenarios demonstrate that FISHER significantly improves tracking accuracy and coordination efficiency, while exhibiting strong robustness, cross-task generalization, and runtime stability.

📝 Abstract
It is significant to employ multiple autonomous underwater vehicles (AUVs) to execute the underwater target tracking task collaboratively. However, it is challenging to meet the various prerequisites using traditional control methods. Therefore, we propose an effective two-stage learning-from-demonstrations training framework, FISHER, to highlight the adaptability of reinforcement learning (RL) methods in the multi-AUV underwater target tracking task, while addressing their limitations such as extensive requirements for environmental interactions and the challenges in designing reward functions. The first stage utilizes imitation learning (IL) to realize policy improvement and generate offline datasets. Specifically, we introduce a multi-agent discriminator-actor-critic based on improvements to the generative adversarial IL algorithm and a multi-agent IL optimization objective derived from the Nash equilibrium condition. In the second stage, we develop a multi-agent independent generalized decision transformer, which analyzes the latent representation to match the future states of high-quality samples rather than a reward function, attaining further-enhanced policies capable of handling various scenarios. In addition, we propose a simulation-to-simulation demonstration generation procedure to facilitate the generation of expert demonstrations in underwater environments, which capitalizes on traditional control methods and easily accomplishes the domain transfer needed to obtain demonstrations. Extensive simulation experiments across multiple scenarios showcase that FISHER possesses strong stability, multi-task performance and generalization capability.
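The first-stage idea described above can be illustrated with a minimal single-agent sketch. All names and shapes here are hypothetical simplifications: the paper's multi-agent discriminator-actor-critic and Nash-equilibrium objective are far more involved. A discriminator D(s, a) is trained to separate expert transitions from policy transitions, and its log-output then serves as a learned surrogate reward, replacing a hand-designed reward function:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def discriminator_step(w, expert_sa, policy_sa, lr=0.1):
    """One gradient-ascent step on the GAIL discriminator objective:
    maximize E_expert[log D] + E_policy[log(1 - D)],
    with D(s, a) = sigmoid(w . [s, a]) as a linear stand-in."""
    d_exp = sigmoid(expert_sa @ w)
    d_pol = sigmoid(policy_sa @ w)
    grad = expert_sa.T @ (1.0 - d_exp) / len(expert_sa) \
         - policy_sa.T @ d_pol / len(policy_sa)
    return w + lr * grad

def surrogate_reward(w, sa):
    """Learned reward handed to the RL agents: high where the
    discriminator believes the transition looks expert-like."""
    return np.log(sigmoid(sa @ w) + 1e-8)

rng = np.random.default_rng(0)
expert = rng.normal(loc=1.0, size=(256, 4))   # expert (state, action) features
policy = rng.normal(loc=-1.0, size=(256, 4))  # current policy's features
w = np.zeros(4)
for _ in range(200):
    w = discriminator_step(w, expert, policy)
```

After training, expert-like transitions receive a higher surrogate reward than the current policy's transitions, which is the signal that drives the first-stage policy improvement.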
Problem

Research questions and friction points this paper is trying to address.

Proposes FISHER framework for multi-AUV underwater target tracking
Addresses reinforcement learning limitations in environmental interaction requirements
Solves challenges in designing reward functions for collaborative AUV control
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage learning from demonstrations training framework
Multi-agent discriminator-actor-critic with Nash equilibrium optimization
Generalized decision transformer analyzing latent representations of states
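The second-stage idea can also be sketched in miniature. In this hedged toy version (all names and shapes are illustrative), the "latent representation" that replaces return-to-go is simply a discounted summary of the expert's future states, and the sequence model is replaced by linear least squares; the paper's multi-agent independent generalized decision transformer is, of course, far richer:

```python
import numpy as np

def future_state_feature(states, t, gamma=0.9):
    """Discounted summary of the states after step t: the reward-free
    conditioning token that stands in for return-to-go."""
    future = states[t + 1:]
    weights = gamma ** np.arange(len(future))
    return (weights[:, None] * future).sum(axis=0) / weights.sum()

rng = np.random.default_rng(1)
states = rng.normal(size=(50, 3))
# Demonstration actions depend on both the current state and where the
# expert trajectory is heading (its future states).
true_W = rng.normal(size=(6, 2))
inputs = np.array([np.concatenate([states[t], future_state_feature(states, t)])
                   for t in range(len(states) - 1)])
actions = inputs @ true_W

# Fit the future-conditioned policy (least-squares stand-in for the
# transformer) from the demonstration data alone, with no reward signal.
W_hat, *_ = np.linalg.lstsq(inputs, actions, rcond=None)
```

Because the policy is conditioned on where high-quality trajectories are headed rather than on a scalar reward, the same fitted model can be steered toward different target behaviors at evaluation time, which is the source of the multi-task capability claimed above.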
👥 Authors

Jingzehua Xu (Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518000, China)
Guanwen Xie (Tsinghua University)
Ziqi Zhang (School of Engineering, Westlake University, Zhejiang, 310030, China)
Xiangwang Hou (Department of EE, Tsinghua University)
Dongfang Ma (Ocean College, Zhejiang University, Zhoushan, 316000, China)
Shuai Zhang (Department of Data Science, New Jersey Institute of Technology, New Jersey, 07450, USA)
Yong Ren (Institute of Automation, Chinese Academy of Sciences)
D. Niyato (College of Computing and Data Science, Nanyang Technological University, Singapore)