🤖 AI Summary
Addressing the challenge of simultaneously achieving lightweight design, high efficiency, and robustness in vision-based navigation for mobile robots, this paper proposes a lightweight teach-and-repeat navigation method that eliminates the need for precise pose estimation or dense mapping. The approach comprises three key contributions: (1) a qualitative mapping model between visual feature flow and robot motion, reformulating repeat navigation as a feature-flow minimization problem; (2) automatic keyframe selection and sparse graph construction via feature-flow analysis, enabling efficient path representation and retrieval; and (3) integration of probabilistic motion planning to ensure stable navigation without accurate localization. Experiments on real mobile platforms demonstrate substantial improvements over mainstream baselines, achieving real-time performance (<15 ms per frame) and strong robustness under varying lighting, viewpoint, and structural conditions. The implementation is publicly available.
📝 Abstract
Though visual teach-and-repeat navigation is a convenient solution for mobile robot self-navigation, balancing efficiency and robustness in the task environment remains challenging. In this paper, we propose a novel visual teach-and-repeat autonomous navigation method that requires no accurate localization or dense reconstruction modules, making our system both lightweight and robust. First, we introduce feature flow, defined as the pixel-location bias between matched features, and develop a qualitative mapping between feature flow and the robot's motion. Based on this mapping model, the map produced by the teaching phase is represented as a keyframe graph, in which the feature flow on each edge encodes the relative motion between adjacent keyframes. Second, visual repeat navigation is modeled as a feature-flow minimization problem between the current observation and the map keyframes. To drive the robot to consistently reduce the feature flow between the current frame and the map keyframes without accurate localization, a probabilistic motion planner is developed based on our qualitative feature flow-motion mapping indicator. Extensive experiments on our mobile platform demonstrate that the proposed method is lightweight, robust, and superior to baselines. The source code has been made public at https://github.com/wangjks/FFI-VTR to benefit the community.
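The core idea, feature flow as a steering signal, can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the mean-displacement definition of flow, the proportional gain, and the sign convention are all illustrative assumptions; the intuition is that a horizontal offset between current keypoints and keyframe keypoints acts as a heading-error proxy that the controller drives toward zero.

```python
import numpy as np

def feature_flow(curr_pts, key_pts):
    # Feature flow: pixel-location bias between matched features,
    # here taken as the mean displacement (du, dv) from keyframe
    # keypoints to the corresponding current-frame keypoints.
    return np.mean(np.asarray(curr_pts, float) - np.asarray(key_pts, float), axis=0)

def steering_command(flow_u, gain=0.005, max_rate=0.5):
    # Hypothetical proportional controller: turn so as to shrink the
    # horizontal flow component; gain and saturation are arbitrary.
    return float(np.clip(-gain * flow_u, -max_rate, max_rate))

# Matched keypoints (u, v) in a map keyframe and the current frame;
# the scene appears shifted ~40 px to the right of where it was taught.
key_pts  = [(100, 120), (220, 130), (310, 125)]
curr_pts = [(140, 121), (260, 129), (350, 126)]

flow = feature_flow(curr_pts, key_pts)   # horizontal component ~= 40 px
omega = steering_command(flow[0])        # angular rate that reduces the flow
```

In the paper's formulation the flow on each keyframe-graph edge encodes relative motion qualitatively, so replay reduces to repeatedly shrinking this quantity rather than estimating a metric pose.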