AI Summary
In long, straight indoor corridors, particle-filter-based 2D-SLAM suffers from severe degeneracy, leading to pose estimation drift and failure. To address this, we propose DOA (Degeneracy Optimization Agent), a deep reinforcement learning-based adaptive optimization framework. DOA dynamically adjusts multi-sensor fusion weights to compensate for pose degradation, introducing a novel degeneracy-aware reward function and a degeneracy-factor-driven linear interpolation mechanism to jointly optimize perception and state estimation. We train the agent using the Proximal Policy Optimization (PPO) algorithm and incorporate a transfer learning module to enhance cross-environment generalization; additionally, we apply dynamic observation distribution shifting to strengthen motion model dominance under degeneracy. Experimental results demonstrate that DOA significantly outperforms existing methods in degeneracy detection accuracy, pose optimization quality, and localization robustness under challenging degenerate conditions.
Abstract
Particle-filter-based 2D-SLAM is widely used in indoor localization tasks due to its efficiency. However, indoor environments such as long straight corridors can cause severe degeneracy problems in SLAM. In this paper, we use Proximal Policy Optimization (PPO) to train an adaptive degeneracy optimization agent (DOA) to address the degeneracy problem. We propose a systematic methodology to address three critical challenges in traditional supervised learning frameworks: (1) data acquisition bottlenecks for degenerate datasets, (2) inherent quality deterioration of training samples, and (3) ambiguity in annotation protocol design. We design a specialized reward function to guide the agent in developing perception capabilities for degenerate environments. Using the output degeneracy factor as a reference weight, the agent can dynamically adjust the contribution of different sensors to pose optimization. Specifically, the observation distribution is shifted towards the motion model distribution, with the step size determined by a linear interpolation formula related to the degeneracy factor. In addition, we employ a transfer learning module to endow the agent with generalization capabilities across different environments and to address the inefficiency of training in degenerate environments. Finally, we conduct ablation studies to demonstrate the rationality of our model design and the role of transfer learning. We also compare the proposed DOA with SOTA methods to demonstrate its superior degeneracy detection and optimization capabilities across various environments.
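The degeneracy-factor-driven interpolation described above can be sketched as follows. This is a minimal illustrative assumption, not the paper's exact formula: the fused pose is a linear blend that shifts weight from the observation model toward the motion model as the degeneracy factor grows (e.g. in a long straight corridor). The function name `fuse_pose` and the simple component-wise form are hypothetical.

```python
def fuse_pose(obs_pose, motion_pose, degeneracy_factor):
    """Shift the observation-based pose toward the motion-model pose.

    obs_pose / motion_pose: pose vectors, e.g. [x, y, theta].
    degeneracy_factor: agent output in [0, 1]; 0 keeps the observation
    pose, 1 fully trusts the motion model. The linear form is an
    illustrative sketch of the interpolation mechanism.
    """
    d = max(0.0, min(1.0, degeneracy_factor))  # clamp to a valid weight
    return [(1.0 - d) * o + d * m for o, m in zip(obs_pose, motion_pose)]
```

With `d = 0` the observation pose is returned unchanged; with `d = 1` the estimate collapses onto the motion-model prediction, matching the intended behavior in severely degenerate scenes.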