AirSLAM: An Efficient and Illumination-Robust Point-Line Visual SLAM System

📅 2024-08-07
🏛️ IEEE Transactions on Robotics
📈 Citations: 3
Influential: 0
🤖 AI Summary
To address the degradation of robustness and efficiency in visual SLAM under short- and long-term illumination variations, this paper proposes a joint point-line, illumination-robust SLAM system. Methodologically, it introduces: (1) a unified CNN architecture that simultaneously extracts keypoints and structural line features; (2) a lightweight relocalization module that leverages prebuilt map points, lines, and a structure graph for rapid pose recovery; and (3) a coupled pipeline in which front-end feature association and back-end graph optimization are performed jointly over points and lines. The feature detection and matching networks are deployed in C++ and accelerated with NVIDIA TensorRT, and the system runs at 73 Hz on a PC and 40 Hz on an embedded platform. Evaluated on multiple illumination-challenging datasets, it significantly outperforms state-of-the-art methods. The source code is publicly available.

📝 Abstract
In this paper, we present an efficient visual SLAM system designed to tackle both short-term and long-term illumination challenges. Our system adopts a hybrid approach that combines deep learning techniques for feature detection and matching with traditional backend optimization methods. Specifically, we propose a unified convolutional neural network (CNN) that simultaneously extracts keypoints and structural lines. These features are then associated, matched, triangulated, and optimized in a coupled manner. Additionally, we introduce a lightweight relocalization pipeline that reuses the built map, where keypoints, lines, and a structure graph are used to match the query frame with the map. To enhance the applicability of the proposed system to real-world robots, we deploy and accelerate the feature detection and matching networks using C++ and NVIDIA TensorRT. Extensive experiments conducted on various datasets demonstrate that our system outperforms other state-of-the-art visual SLAM systems in illumination-challenging environments. Efficiency evaluations show that our system can run at a rate of 73Hz on a PC and 40Hz on an embedded platform. Our implementation is open-sourced: https://github.com/sair-lab/AirSLAM.
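The abstract describes a back end that optimizes points and lines in a coupled manner. As an illustration only (not the paper's actual code or formulation), the sketch below shows the kind of combined point-line reprojection cost such a back end minimizes: pixel error for map points, plus point-to-line distance for the projected endpoints of map lines. All names, the simple pinhole model, and the line parameterization are assumptions for clarity.

```python
import numpy as np

def project(K, R, t, X):
    """Project 3D points X (N, 3) into pixels with intrinsics K and pose (R, t)."""
    Xc = X @ R.T + t              # world frame -> camera frame
    uv = Xc @ K.T                 # apply pinhole intrinsics
    return uv[:, :2] / uv[:, 2:3] # perspective division

def coupled_cost(K, R, t, pts3d, pts2d, line_endpoints3d, line_obs2d, w_line=1.0):
    """Squared point reprojection error plus weighted line-endpoint error.

    line_endpoints3d: (M, 2, 3) 3D endpoints of map lines.
    line_obs2d:       (M, 3) observed 2D lines as coefficients (a, b, c)
                      with a^2 + b^2 = 1, so |a*u + b*v + c| is the
                      pixel-to-line distance.
    """
    # Point term: distance between projected map points and observed keypoints.
    r_pts = project(K, R, t, pts3d) - pts2d
    cost = np.sum(r_pts ** 2)
    # Line term: distance of each projected endpoint to the observed 2D line.
    for (P, Q), l in zip(line_endpoints3d, line_obs2d):
        uv = project(K, R, t, np.stack([P, Q]))
        d = uv @ l[:2] + l[2]     # signed point-to-line distances
        cost += w_line * np.sum(d ** 2)
    return cost
```

A solver such as g2o or Ceres would minimize a cost of this shape over the camera poses and landmark parameters; the sketch only evaluates it for a single frame.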
Problem

Research questions and friction points this paper is trying to address.

Develops illumination-robust visual SLAM system
Combines deep learning with backend optimization
Enhances efficiency and relocalization for robotics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid deep learning and optimization
Unified CNN for feature extraction
Lightweight relocalization pipeline
Kuan Xu
Nanyang Technological University
robotics, visual SLAM

Y. Hao
Spatial AI & Robotics Lab, Department of Computer Science and Engineering, University at Buffalo, Buffalo, NY

Shenghai Yuan
Centre for Advanced Robotics Technology Innovation (CARTIN), School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore

Chen Wang
Spatial AI & Robotics Lab, Department of Computer Science and Engineering, University at Buffalo, Buffalo, NY

Lihua Xie
Professor of Electrical Engineering, Nanyang Technological University
Robust Control, Networked Control, Multi-agent Systems