GRL-SNAM: Geometric Reinforcement Learning with Path Differential Hamiltonians for Simultaneous Navigation and Mapping in Unknown Environments

📅 2025-12-31
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a multi-agent cooperative Simultaneous Navigation and Mapping (SNAM) approach for unknown environments without prior maps, relying solely on local perception. By formulating navigation and mapping as a dynamic shortest-path discovery process under controlled Hamiltonian optimization, the method constructs local energy landscapes and iteratively updates an adaptive Hamiltonian to jointly optimize perception, planning, and reconstruction strategies. By integrating geometric reinforcement learning with a path-differential Hamiltonian framework, the approach dispenses with global map construction and instead employs the Hamiltonian as a scoring function for efficient online mapping and navigation. Experiments show that the method outperforms both local reactive and global-policy baselines in 2D unknown environments, maintaining safe inter-agent spacing and generalizing to unseen layouts with minimal exploration cost.

📝 Abstract
We present GRL-SNAM, a geometric reinforcement learning framework for Simultaneous Navigation and Mapping (SNAM) in unknown environments. SNAM is challenging because it requires hierarchical or joint policies for multiple agents that drive a real robot toward its goal in a mapless environment, i.e., one where no map is available a priori and must instead be acquired through sensors. The sensors are invoked by the path learner (the navigator) through active query responses to sensory agents along the motion path. GRL-SNAM differs from preemptive navigation algorithms and other reinforcement learning methods by relying exclusively on local sensory observations, without constructing a global map. Our approach formulates path navigation and mapping as a dynamic shortest-path search and discovery process using controlled Hamiltonian optimization: sensory inputs are translated into local energy landscapes that encode reachability, obstacle barriers, and deformation constraints, while policies for sensing, planning, and reconfiguration evolve stagewise via updated Hamiltonians. A reduced Hamiltonian serves as an adaptive score function, updating kinetic/potential terms, embedding barrier constraints, and continuously refining trajectories as new local information arrives. We evaluate GRL-SNAM on two different 2D navigation tasks. Compared against local reactive baselines and global policy learning references under identical stagewise sensing constraints, it preserves clearance, generalizes to unseen layouts, and demonstrates that geometric RL via updated Hamiltonians enables high-quality navigation with minimal exploration through local energy refinement rather than extensive global mapping. The code is publicly available on GitHub: https://github.com/CVC-Lab/GRL-SNAM.
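To make the "Hamiltonian as an adaptive score function" idea concrete, here is a minimal 2D sketch: a candidate state is scored by a kinetic term, a goal potential, and repulsive obstacle barriers, and the agent greedily steps to the lowest-scoring neighbor. All function names, gain values, and the specific term forms below are illustrative assumptions, not the paper's actual reduced Hamiltonian or update rule.

```python
import numpy as np

def reduced_hamiltonian(q, p, goal, obstacles, barrier_gain=1.0, eps=1e-6):
    """Illustrative score: kinetic energy + goal potential + obstacle barriers.

    The term forms and gains are assumptions for demonstration only."""
    kinetic = 0.5 * float(np.dot(p, p))                      # kinetic term from velocity p
    potential = float(np.linalg.norm(np.asarray(goal) - q))  # attraction toward the goal
    barrier = sum(barrier_gain / (np.linalg.norm(q - np.asarray(ob)) + eps)
                  for ob in obstacles)                       # repulsion near each obstacle
    return kinetic + potential + barrier

def greedy_step(q, goal, obstacles, step=0.5):
    """Move to the axis-aligned neighbor with the lowest Hamiltonian score."""
    moves = [np.array(d, dtype=float) * step
             for d in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    return min((q + m for m in moves),
               key=lambda nq: reduced_hamiltonian(nq, nq - q, goal, obstacles))

# Toy scenario: one obstacle sits on the straight line from start to goal,
# so the barrier term forces the trajectory to detour around it.
q = np.array([0.0, 0.0])
goal = (4.0, 0.0)
obstacles = [(2.0, 0.0)]
for _ in range(12):
    q = greedy_step(q, goal, obstacles)
```

In this toy run the agent arcs above the obstacle and converges to the goal; new sensor readings would, in the paper's setting, be folded in by re-weighting the barrier and potential terms online rather than by rebuilding a global map.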
Problem

Research questions and friction points this paper is trying to address.

Simultaneous Navigation and Mapping
unknown environments
mapless navigation
local sensory observations
multi-agent coordination
Innovation

Methods, ideas, or system contributions that make the work stand out.

Geometric Reinforcement Learning
Hamiltonian Optimization
Simultaneous Navigation and Mapping
Local Energy Landscape
Mapless Navigation