🤖 AI Summary
This study addresses the inefficiency and frequent replanning requirements of traditional coverage path planning in complex maritime environments featuring irregular coastlines, islands, and no-sail zones. To tackle these challenges, the work introduces, for the first time, a critic-free reinforcement learning approach tailored to long-horizon coverage path planning. It proposes an autoregressive path generation method based on a Transformer pointer network, combined with Group Relative Policy Optimization (GRPO), which estimates advantage functions through intra-instance trajectory comparisons to circumvent unstable value estimation. Integrated with hexagonal grid modeling and 2-opt local optimization, the method achieves a 99.0% success rate in generating Hamiltonian paths across 1,000 unseen synthetic nautical charts, yielding paths 7% shorter than those produced by the best heuristic, with 24% fewer turns and inference times under 50 milliseconds—enabling real-time deployment.
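The core idea behind the critic-free GRPO scheme described above is that, for each problem instance, several trajectories are sampled and each one's reward is normalized against the statistics of its own group, so no learned value function is needed. The following is a minimal sketch of that group-relative advantage computation; the function name and the use of negative tour length as the reward are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def group_relative_advantages(rewards, eps=1e-8):
    """Critic-free advantage estimate (GRPO-style): each sampled
    trajectory's reward is compared to the mean and std of the
    group of trajectories sampled from the same instance."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)

# Hypothetical rewards (negative tour lengths) for four
# trajectories sampled on one synthetic chart:
adv = group_relative_advantages([-120.0, -110.0, -130.0, -115.0])
```

Trajectories shorter than the group average receive positive advantage and are reinforced; longer ones receive negative advantage, all without estimating a per-state value.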
📝 Abstract
Maritime surveillance missions, such as search and rescue and environmental monitoring, rely on the efficient allocation of sensing assets over vast and geometrically complex areas. Traditional Coverage Path Planning (CPP) approaches depend on decomposition techniques that struggle with irregular coastlines, islands, and exclusion zones, or require computationally expensive re-planning for every instance. We propose a Deep Reinforcement Learning (DRL) framework to solve CPP on hexagonal grid representations of irregular maritime areas. Unlike conventional methods, we formulate the problem as a neural combinatorial optimization task where a Transformer-based pointer policy autoregressively constructs coverage tours. To overcome the instability of value estimation in long-horizon routing problems, we implement a critic-free Group-Relative Policy Optimization (GRPO) scheme. This method estimates advantages through within-instance comparisons of sampled trajectories rather than relying on a value function. Experiments on 1,000 unseen synthetic maritime environments demonstrate that a trained policy achieves a 99.0% Hamiltonian success rate, more than double the best heuristic (46.0%), while producing paths 7% shorter and with 24% fewer heading changes than the closest baseline. All three inference modes (greedy, stochastic sampling, and sampling with 2-opt refinement) operate under 50 ms per instance on a laptop GPU, confirming feasibility for real-time on-board deployment.
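The 2-opt refinement mentioned in both sections is a standard local-search move: reverse a segment of the path whenever doing so shortens it, and repeat until no improving reversal remains. A minimal sketch for an open path over 2D waypoints follows; it uses Euclidean distance and a first-improvement strategy as simplifying assumptions (the paper operates on hexagonal grid cells, and only interior reversals are considered here).

```python
import math

def path_length(path):
    """Total Euclidean length of an open path of 2D points."""
    return sum(math.dist(path[i], path[i + 1]) for i in range(len(path) - 1))

def two_opt(path):
    """First-improvement 2-opt on an open path: reverse the segment
    path[i..j] whenever that shortens the path, until no improving
    move remains."""
    path = list(path)
    improved = True
    while improved:
        improved = False
        for i in range(1, len(path) - 2):
            for j in range(i + 1, len(path) - 1):
                a, b = path[i - 1], path[i]
                c, d = path[j], path[j + 1]
                # Length change if segment path[i..j] is reversed:
                # only the two boundary edges are affected.
                delta = (math.dist(a, c) + math.dist(b, d)
                         - math.dist(a, b) - math.dist(c, d))
                if delta < -1e-12:
                    path[i:j + 1] = reversed(path[i:j + 1])
                    improved = True
    return path

# A self-crossing path that one reversal untangles:
crossing = [(0, 0), (2, 2), (2, 0), (0, 2)]
refined = two_opt(crossing)
```

On this toy input the single reversal removes the crossing and shortens the path from about 7.66 to 6.0, which is the kind of turn-reducing cleanup the refinement stage performs on sampled tours.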