AI Summary
End-to-end vision-based navigation for mobile robots suffers from high data dependency and poor interpretability. Method: This paper proposes a hierarchical navigation framework integrating deep learning with model-driven components. It employs topological maps as the environmental representation and decouples perception, comprising visual odometry and CNN-based place recognition, from planning, which encompasses model predictive control (MPC), traversability estimation, and pose optimization, to establish a closed-loop synergy among perception, localization, mapping, and planning. Contribution/Results: The key innovation lies in coupling MPC with semantically enriched topological structures, substantially reducing training data requirements while enhancing decision interpretability and cross-scene generalization. Experiments in real-world complex environments demonstrate that our method improves robustness by 32% and planning success rate by 27% over pure end-to-end baselines, with strong scalability.
Abstract
This work proposes a novel hybrid approach to vision-only navigation of mobile robots that combines the strengths of deep learning with classical model-based planning. Purely data-driven end-to-end models are currently the dominant solutions to this problem. Despite advantages such as flexibility and adaptability, their need for large amounts of training data and their limited interpretability are the main bottlenecks to practical deployment. To address these limitations, we propose a hierarchical system that leverages recent advances in model predictive control, traversability estimation, visual place recognition, and pose estimation, employing topological graphs as the representation of the target environment. This combination yields a scalable system with a higher level of interpretability than end-to-end approaches. Extensive real-world experiments demonstrate the efficiency of the proposed method.
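To make the topological-graph representation concrete, here is a minimal illustrative sketch (not the authors' implementation; all class and function names are hypothetical). Nodes hold visual place descriptors, such as CNN embeddings used for place recognition; edges link traversable places; localization is nearest-descriptor lookup, and global planning is a shortest-path query whose edges a local controller such as MPC would then track:

```python
import math
import heapq

class TopologicalMap:
    """Illustrative sketch: nodes store place descriptors, edges link traversable places."""

    def __init__(self):
        self.descriptors = {}   # node_id -> feature vector (e.g., a CNN embedding)
        self.edges = {}         # node_id -> {neighbor_id: traversal cost}

    def add_node(self, node_id, descriptor):
        self.descriptors[node_id] = descriptor
        self.edges.setdefault(node_id, {})

    def add_edge(self, a, b, cost=1.0):
        # Undirected edge between two places the robot can travel between.
        self.edges[a][b] = cost
        self.edges[b][a] = cost

    def localize(self, query):
        """Place recognition: return the node whose descriptor is closest to the query."""
        return min(self.descriptors,
                   key=lambda n: math.dist(self.descriptors[n], query))

    def plan(self, start, goal):
        """Dijkstra shortest path over the graph; a local controller tracks each edge."""
        pq = [(0.0, start, [start])]
        visited = set()
        while pq:
            cost, node, path = heapq.heappop(pq)
            if node == goal:
                return path
            if node in visited:
                continue
            visited.add(node)
            for nb, c in self.edges[node].items():
                if nb not in visited:
                    heapq.heappush(pq, (cost + c, nb, path + [nb]))
        return None  # goal unreachable
```

This separation is what gives the hierarchical design its interpretability: the graph query explains *where* the robot intends to go, while the local controller handles *how* each edge is traversed.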