🤖 AI Summary
Existing learning-based path planning methods suffer from poor generalizability due to their reliance on a single map representation. To address this, we propose the first cross-representation, unified adaptive informative path planning framework. Our method decouples policy learning from map parsing, enabling real-time policy inference over heterogeneous map modalities, including occupancy grids and point clouds, via a lightweight neural network, while seamlessly integrating with classical planners to preserve their accuracy and interpretability. Evaluated on unseen real-world terrain datasets, our approach matches the performance of state-of-the-art map-specific methods while achieving significantly improved cross-terrain and cross-format deployability, and it supports online replanning under limited compute resources. The core contribution is the first map-agnostic paradigm for modeling adaptive information-gathering policies, advancing learning-based planning toward practical deployment in heterogeneous real-world environments.
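The decoupling of policy learning from map parsing can be pictured as a shared observation interface that every map representation implements, so one policy network serves all of them. The sketch below is purely illustrative; all class and function names are hypothetical and not the paper's actual API, and the feature choices and scoring heuristic are stand-ins for the learned components.

```python
# Illustrative sketch of a map-agnostic planning interface.
# Hypothetical names (MapRepresentation, to_observation, policy) --
# not the paper's actual implementation. The key idea: the policy
# consumes a fixed-size observation vector, and each representation
# supplies its own parser producing that vector.
from abc import ABC, abstractmethod

class MapRepresentation(ABC):
    """Every map modality reduces itself to the same observation format."""
    @abstractmethod
    def to_observation(self, position):
        ...

class OccupancyGridMap(MapRepresentation):
    def __init__(self, grid):
        self.grid = grid  # 2D list of cell occupancy probabilities

    def to_observation(self, position):
        x, y = position
        # Toy features: mean occupancy and local variance around the robot.
        cells = [self.grid[i][j]
                 for i in range(max(0, x - 1), min(len(self.grid), x + 2))
                 for j in range(max(0, y - 1), min(len(self.grid[0]), y + 2))]
        mean = sum(cells) / len(cells)
        var = sum((c - mean) ** 2 for c in cells) / len(cells)
        return [mean, var]

class PointCloudMap(MapRepresentation):
    def __init__(self, points):
        self.points = points  # list of (x, y, z) tuples

    def to_observation(self, position):
        x, y = position
        # Toy features: local point density and mean height near the robot.
        near = [p for p in self.points
                if abs(p[0] - x) <= 1 and abs(p[1] - y) <= 1]
        if not near:
            return [0.0, 0.0]
        density = len(near) / len(self.points)
        mean_z = sum(p[2] for p in near) / len(near)
        return [density, mean_z]

def policy(observation, candidates):
    """Stand-in for the learned policy network: scores candidate
    waypoints from the representation-agnostic observation."""
    info, spread = observation
    # Toy heuristic: prefer farther candidates when local information is low.
    return max(candidates, key=lambda c: (1 - info) * (c[0] + c[1]) + spread)
```

Because the policy only ever sees the fixed-size observation, the same inference code runs unchanged whether the robot carries an occupancy grid or a point cloud, which is the deployment flexibility the summary describes.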
📝 Abstract
Robots are frequently tasked with gathering relevant sensor data in unknown terrains. A key challenge for classical path planning algorithms used in autonomous information gathering is adaptively replanning paths online as the terrain is explored, given limited onboard compute resources. Recently, learning-based approaches have emerged that train planning policies offline and enable computationally efficient online replanning via policy inference. However, these approaches are designed and trained for terrain monitoring missions assuming a single, specific map representation, which limits their applicability to different terrains. To address this issue, we propose a novel formulation of the adaptive informative path planning problem that is unified across map representations, enabling planning policies to be trained and deployed in a larger variety of monitoring missions. Experimental results show that our formulation integrates easily with classical non-learning planning approaches while maintaining their performance, and that our trained planning policy performs on par with state-of-the-art policies trained for a specific map representation. We further validate our learned policy on unseen real-world terrain datasets.