FLAF: Focal Line and Feature-constrained Active View Planning for Visual Teach and Repeat

📅 2024-09-05
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the critical challenge of feature-tracking failure in texture-deficient regions of man-made environments, which limits the practicality of visual navigation, this paper proposes an active viewpoint-planning method that jointly leverages focal-line geometry and feature-discriminability constraints, integrated into a feature-based visual teach and repeat (VT&R) system. For the first time, focal-line geometric constraints and map-point discriminability are co-modeled to enable task-specific, proactive viewpoint control during both the teach and repeat phases, overcoming the robustness limitations of conventional passive-viewpoint tracking in low-texture areas. The method integrates pan-tilt-unit (PTU) control, feature-based VSLAM, focal-line projection-geometry modeling, discriminability quantification, and online optimization. Experiments demonstrate a 32% improvement in mapping completeness and a repeat-phase localization success rate exceeding 96%, significantly enhancing navigation reliability in complex real-world environments.

📝 Abstract
This paper presents FLAF, a focal line and feature-constrained active view planning method for tracking-failure avoidance in feature-based visual navigation of mobile robots. Our FLAF-based visual navigation is built upon a feature-based visual teach and repeat (VT&R) framework, which supports many robotic applications by teaching a robot to navigate on various paths that cover a significant portion of daily autonomous navigation requirements. However, tracking failure in feature-based visual simultaneous localization and mapping (VSLAM), caused by textureless regions in human-made environments, still limits the adoption of VT&R in the real world. To address this problem, the proposed view planner is integrated into a feature-based visual SLAM system to build an active VT&R system that avoids tracking failure. In our system, a pan-tilt unit (PTU)-based active camera is mounted on the mobile robot. Using FLAF, the active camera-based VSLAM operates during the teach phase to construct a complete path map and during the repeat phase to maintain stable localization. FLAF orients the camera toward more map points to avoid mapping failures during path learning, and toward more feature-identifiable map points beneficial for localization while following the learned trajectory. Experiments in real scenarios demonstrate that FLAF outperforms methods that do not consider feature identifiability, and that our active VT&R system performs well in complex environments by effectively dealing with low-texture regions.
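The core idea of the abstract, pointing a PTU camera at the view that covers the most useful map points, can be illustrated with a toy grid search. This is a minimal sketch, not the paper's actual formulation: the field-of-view limits, the `visible` test, and the per-point discriminability scores are all illustrative assumptions.

```python
import math

def visible(point, pan, tilt, h_fov=math.radians(60), v_fov=math.radians(40)):
    """Check whether a 3-D map point (x, y, z) in the camera's base frame
    falls inside the field of view for a given pan/tilt angle (radians)."""
    x, y, z = point
    az = math.atan2(y, x)                  # azimuth of the point
    el = math.atan2(z, math.hypot(x, y))   # elevation of the point
    pan_err = abs((az - pan + math.pi) % (2 * math.pi) - math.pi)
    return pan_err < h_fov / 2 and abs(el - tilt) < v_fov / 2

def best_view(map_points, scores, pan_grid, tilt_grid):
    """Grid-search the pan/tilt pair that maximizes the summed
    discriminability of map points inside the field of view."""
    best, best_score = (0.0, 0.0), -1.0
    for pan in pan_grid:
        for tilt in tilt_grid:
            s = sum(w for p, w in zip(map_points, scores)
                    if visible(p, pan, tilt))
            if s > best_score:
                best, best_score = (pan, tilt), s
    return best, best_score
```

For example, with two weakly scored points ahead of the robot and one highly discriminable point behind it, the search turns the camera backwards, which is the qualitative behavior the abstract describes for low-texture regions.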
Problem

Research questions and friction points this paper is trying to address.

Avoid tracking failure in visual navigation
Enhance feature-based visual SLAM
Improve VT&R in low-texture environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Focal line view planning
Feature-constrained navigation
Active VT&R system
Changfei Fu
Shenzhen Key Laboratory of Robotics and Computer Vision, Southern University of Science and Technology (SUSTech), and the Department of Electrical and Electronic Engineering, SUSTech, Shenzhen, China; Peng Cheng National Laboratory, Shenzhen, China
Weinan Chen
Guangdong University of Technology
Mobile Robot, SLAM
Wenjun Xu
Peng Cheng Laboratory
machine learning, reinforcement learning, flexible/soft robot
Hong Zhang
Shenzhen Key Laboratory of Robotics and Computer Vision, Southern University of Science and Technology (SUSTech), and the Department of Electrical and Electronic Engineering, SUSTech, Shenzhen, China