Goal-Conditioned Reinforcement Learning for Data-Driven Maritime Navigation

📅 2025-09-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address poor generalization and inadequate adaptability to multiple source-destination (S-D) pairs in vessel path planning within confined dynamic waterways, this paper proposes a goal-oriented reinforcement learning framework. Methodologically, it integrates AIS traffic data with ERA5 wind field information to construct a dynamic hexagonal grid-based state representation; designs a target-conditioned multi-discrete action space; incorporates invalid-action masking and positive reward shaping; and employs a recurrent-PPO algorithm for policy optimization. Empirical evaluation in the Gulf of St. Lawrence demonstrates that action masking significantly accelerates convergence and enhances policy stability, while reward shaping reduces fuel consumption (−12.3%) and voyage time (−8.7%), all while preserving path diversity. The core contribution is the first realization of adaptive maritime decision-making for multiple S-D pairs, jointly driven by large-scale traffic data and physical environmental constraints.

Technology Category

Application Category

📝 Abstract
Routing vessels through narrow and dynamic waterways is challenging due to changing environmental conditions and operational constraints. Existing vessel-routing studies typically fail to generalize across multiple origin-destination pairs and do not exploit large-scale, data-driven traffic graphs. In this paper, we propose a reinforcement learning solution for big maritime data that can learn to find a route across multiple origin-destination pairs while adapting to different hexagonal grid resolutions. Agents learn to select direction and speed under continuous observations in a multi-discrete action space. A reward function balances fuel efficiency, travel time, wind resistance, and route diversity, using an Automatic Identification System (AIS)-derived traffic graph with ERA5 wind fields. The approach is demonstrated in the Gulf of St. Lawrence, one of the largest estuaries in the world. We evaluate configurations that combine Proximal Policy Optimization with recurrent networks, invalid-action masking, and exploration strategies. Our experiments demonstrate that action masking yields a clear improvement in policy performance and that supplementing penalty-only feedback with positive shaping rewards produces additional gains.
Problem

Research questions and friction points this paper is trying to address.

Optimizing vessel routes in dynamic waterways with environmental constraints
Generalizing navigation across multiple origin-destination pairs using data
Balancing fuel efficiency, travel time, and weather resistance in maritime navigation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Goal-Conditioned Reinforcement Learning for maritime navigation
Multi-discrete action space with continuous observations
AIS-derived traffic graph with ERA5 wind integration
🔎 Similar Papers
No similar papers found.
V
Vaishnav Vaidheeswaran
Faculty of Computer Science, Dalhousie University, Halifax, Canada
D
Dilith Jayakody
Faculty of Computer Science, Dalhousie University, Halifax, Canada
S
Samruddhi Mulay
Faculty of Computer Science, Dalhousie University, Halifax, Canada
A
Anand Lo
Faculty of Computer Science, Dalhousie University, Halifax, Canada
M
Md Mahbub Alam
Faculty of Computer Science, Dalhousie University, Halifax, Canada
Gabriel Spadon
Gabriel Spadon
Assistant Professor, Faculty of Computer Science, Dalhousie University
Data MiningMachine LearningNetwork ScienceGeoinformatics