🤖 AI Summary
This study addresses the challenge of long-term (multi-day) monitoring of the Douro River plume. We propose a multi-AUV cooperative framework featuring: (i) an intermittent communication protocol orchestrated by a central coordinator to reduce energy consumption and communication overhead; (ii) spatiotemporal Gaussian Process Regression (GPR) to model the plume’s dynamic evolution; and (iii) a multi-head Q-network-based multi-agent reinforcement learning (MARL) policy for adaptive navigation and task allocation. Our key innovation is the tight coupling of GPR-based environmental modeling and MARL-based decision-making into a closed-loop modeling-decision architecture, enabling cross-seasonal policy generalization. Evaluated via high-fidelity simulations driven by the Delft3D ocean model, the framework achieves over 100% improvement in operational endurance compared to single- and multi-agent baselines, while simultaneously enhancing monitoring accuracy. Moreover, accuracy remains stable with increasing AUV count, demonstrating scalability and robustness for long-term, large-scale marine environmental monitoring.
📝 Abstract
We study the problem of long-term (multiple days) mapping of a river plume using multiple autonomous underwater vehicles (AUVs), focusing on the Douro river representative use-case. We propose an energy - and communication - efficient multi-agent reinforcement learning approach in which a central coordinator intermittently communicates with the AUVs, collecting measurements and issuing commands. Our approach integrates spatiotemporal Gaussian process regression (GPR) with a multi-head Q-network controller that regulates direction and speed for each AUV. Simulations using the Delft3D ocean model demonstrate that our method consistently outperforms both single- and multi-agent benchmarks, with scaling the number of agents both improving mean squared error (MSE) and operational endurance. In some instances, our algorithm demonstrates that doubling the number of AUVs can more than double endurance while maintaining or improving accuracy, underscoring the benefits of multi-agent coordination. Our learned policies generalize across unseen seasonal regimes over different months and years, demonstrating promise for future developments of data-driven long-term monitoring of dynamic plume environments.