🤖 AI Summary
RL-based meta-planners often suffer from insufficient exploration and sampling bias, because their exploration during training is constrained by the limited capabilities of the underlying classical planners. To address this, the paper proposes a diagnosis-mitigation co-design framework. It introduces the first behavior-guided mechanism for diagnosing exploration bottlenecks, integrating behavioral cloning analysis, exploration coverage metrics, and identification of bottleneck data. Guided by this diagnosis, a targeted up-sampling strategy reweights and augments the under-represented data. The framework is plug-and-play and compatible with diverse RL-based meta-planners. Experiments demonstrate over 13.5% improvement on navigation tasks, markedly better robustness in out-of-distribution environments, and a 4× speedup in training convergence. The core contribution is the first systematic integration of interpretable diagnostic analysis with goal-directed sampling mitigation, effectively alleviating the under-representation of critical-state data regions.
📝 Abstract
Robot navigation is increasingly crucial in applications such as delivery services and warehouse management. The integration of Reinforcement Learning (RL) with classical planning has given rise to meta-planners that combine the adaptability of RL with the explainable decision-making of classical planners. However, the exploration of RL-based meta-planners during training is often constrained by the limitations of the underlying classical planners, resulting in limited exploration and, in turn, sampling skew. To address these issues, this paper introduces a novel framework, DIGIMON, which first performs behavior-guided diagnosis of exploration bottlenecks within the meta-planner and then mitigates them by up-sampling the diagnosed bottleneck data. Our evaluation shows over 13.5% improvement in navigation performance, greater robustness in out-of-distribution environments, and a 4× boost in training efficiency. DIGIMON is designed as a versatile, plug-and-play solution that integrates seamlessly into various RL-based meta-planners.
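The mitigation step described above, up-sampling diagnosed bottleneck data when drawing training batches, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the function name `upsample_batch`, the boolean bottleneck mask, and the `boost` factor are all assumptions made for this example.

```python
import numpy as np

def upsample_batch(states, bottleneck_mask, batch_size, boost=4.0, rng=None):
    """Draw a training batch, reweighting diagnosed bottleneck samples.

    Samples flagged in `bottleneck_mask` receive `boost` times the
    sampling weight of ordinary samples, so under-represented
    critical-state data appears more often in each batch.
    """
    rng = rng or np.random.default_rng(0)
    weights = np.where(bottleneck_mask, boost, 1.0)
    probs = weights / weights.sum()
    idx = rng.choice(len(states), size=batch_size, p=probs)
    return states[idx]

# Toy usage: 100 states, the first 10 diagnosed as bottleneck data.
states = np.arange(100)
mask = np.zeros(100, dtype=bool)
mask[:10] = True
batch = upsample_batch(states, mask, batch_size=32)
```

With `boost=4.0`, the 10 bottleneck states account for roughly 31% of draws (40 of 130 total weight) instead of their 10% base rate; in practice the reweighting would be derived from the diagnosis rather than a fixed constant.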