🤖 AI Summary
To address low exploration efficiency and inaccurate global semantic map construction in unknown environments, this paper proposes a hierarchical exploration framework leveraging semantic map prediction. The method integrates a long-term environmental understanding mechanism with a reinforcement learning–driven reward function, iteratively predicting semantic distributions in unobserved regions and guiding exploration path planning via the discrepancy between the accumulated map and the predicted global map. A hierarchical decision-making architecture is further introduced to optimize long-horizon exploration policies. Experiments on standard benchmarks demonstrate that, under identical time budgets, the proposed approach significantly improves map coverage (+12.7%) and semantic mapping accuracy (mIoU +8.3%) over current state-of-the-art methods.
📝 Abstract
In this paper, we propose SEA, a novel approach for active robot exploration through semantic map prediction and a reinforcement learning-based hierarchical exploration policy. Unlike existing learning-based methods that rely on one-step waypoint prediction, our approach enhances the agent's long-term environmental understanding to facilitate more efficient exploration. We propose an iterative prediction-exploration framework that explicitly predicts the missing areas of the map based on current observations. The difference between the actual accumulated map and the predicted global map is then used to guide exploration. Additionally, we design a novel reward mechanism that leverages reinforcement learning to update long-term exploration strategies, enabling us to construct an accurate semantic map within a limited number of steps. Experimental results demonstrate that our method significantly outperforms state-of-the-art exploration strategies, achieving superior coverage of the global map within the same time constraints.
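The prediction-guided goal selection described above can be sketched in a few lines. The snippet below is a minimal toy illustration, not the paper's implementation: `select_goal` and the grid encoding (unobserved cells as `np.nan`, a uniform-prior predictor) are hypothetical stand-ins for the learned map predictor and planner.

```python
import numpy as np

def select_goal(observed, predicted):
    """Pick the next exploration goal as the cell where the predicted
    global map disagrees most with the accumulated observations.
    Unobserved cells are encoded as np.nan in `observed`."""
    # Treat unobserved cells as empty when measuring the gap.
    discrepancy = np.abs(predicted - np.nan_to_num(observed, nan=0.0))
    # Already-observed cells need no revisit, so zero them out.
    discrepancy[~np.isnan(observed)] = 0.0
    return np.unravel_index(np.argmax(discrepancy), discrepancy.shape)

# Toy 4x4 semantic occupancy grid with three unobserved (nan) cells.
observed = np.array([
    [1.0, 0.0, np.nan, 0.0],
    [0.0, 1.0, 0.0, np.nan],
    [0.0, 0.0, 1.0, 0.0],
    [np.nan, 0.0, 0.0, 1.0],
])
predicted = np.full_like(observed, 0.2)  # weak prior everywhere
predicted[0, 2] = 0.9  # predictor is confident something lies at (0, 2)

goal = select_goal(observed, predicted)
print(goal)  # -> (0, 2): the unobserved cell with the largest predicted-vs-observed gap
```

In the full method this discrepancy signal feeds a hierarchical policy trained with reinforcement learning rather than a greedy argmax, but the principle is the same: exploration is drawn toward regions where the predicted global map is least supported by what has actually been observed.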