Benchmarking Visual Feature Representations for LiDAR-Inertial-Visual Odometry Under Challenging Conditions

📅 2026-03-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Building on the FAST-LIVO2 framework, this work proposes a hybrid LiDAR-inertial-visual odometry approach that systematically integrates direct photometric methods with traditional and learned feature descriptors, including ORB, SuperPoint, and XFeat, alongside their corresponding matching strategies such as Hamming distance, SuperGlue, and LightGlue. By combining the direct and feature-based paradigms, the method mitigates the failure of pure direct methods in regions with inconsistent illumination while keeping computational overhead low. The resulting system improves feature tracking stability and localization accuracy, demonstrating robust and reliable state estimation even under challenging conditions such as low light, overexposure, varying illumination, and high parallax.
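ORB produces binary descriptors, which is why the paper pairs it with the Hamming distance rather than a Euclidean metric. As a minimal illustration (not the paper's implementation), the toy descriptors below stand in for ORB's 256-bit output, packed into 32 bytes:

```python
import numpy as np

def hamming_distance(d1, d2):
    """Hamming distance between two binary descriptors stored as uint8
    arrays (ORB packs 256 bits into 32 bytes)."""
    return int(np.unpackbits(np.bitwise_xor(d1, d2)).sum())

# Toy 32-byte descriptors standing in for ORB output (illustrative only).
a = np.zeros(32, dtype=np.uint8)
b = a.copy()
b[0] = 0b00000111  # differ from `a` in exactly three bits

print(hamming_distance(a, b))  # -> 3
```

Because the distance reduces to an XOR and a popcount, brute-force Hamming matching is far cheaper than float-descriptor matching, which is part of why the ORB configuration has the lowest computational cost among the benchmarked pairs.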

📝 Abstract
Accurate localization in autonomous driving is critical for successful missions, including environmental mapping and survivor searches. In visually challenging environments, including low-light conditions, overexposure, illumination changes, and high parallax, the performance of conventional visual odometry methods degrades significantly, undermining robust robotic navigation. Researchers have recently proposed LiDAR-inertial-visual odometry (LIVO) frameworks that integrate LiDAR, IMU, and camera sensors to address these challenges. This paper extends the FAST-LIVO2-based framework by introducing a hybrid approach that integrates direct photometric methods with descriptor-based feature matching. For the descriptor-based feature matching, this work pairs ORB with the Hamming distance, SuperPoint with SuperGlue, SuperPoint with LightGlue, and XFeat with mutual nearest neighbor matching. The proposed configurations are benchmarked by accuracy, computational cost, and feature tracking stability, enabling a quantitative comparison of the adaptability and applicability of visual descriptors. The experimental results reveal that the proposed hybrid approach outperforms the conventional sparse-direct method. Although the sparse-direct method often fails to converge in regions where photometric inconsistency arises due to illumination changes, the proposed approach maintains robust performance under the same conditions. Furthermore, the hybrid approach with learning-based descriptors enables robust and reliable visual state estimation across challenging environments.
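The XFeat configuration in the abstract uses mutual nearest neighbor (MNN) matching: a pair is accepted only if each descriptor is the other's closest match. The following sketch shows the idea on synthetic descriptors; it is an assumption-laden illustration, not the paper's code:

```python
import numpy as np

def mutual_nearest_neighbors(desc1, desc2):
    """Return index pairs (i, j) where desc1[i] and desc2[j] are each
    other's nearest neighbor under squared Euclidean distance."""
    # Pairwise squared distances between the two descriptor sets.
    d = ((desc1[:, None, :] - desc2[None, :, :]) ** 2).sum(-1)
    nn12 = d.argmin(axis=1)          # nearest in set 2 for each of set 1
    nn21 = d.argmin(axis=0)          # nearest in set 1 for each of set 2
    ids1 = np.arange(len(desc1))
    mutual = nn21[nn12] == ids1      # keep only mutually consistent pairs
    return list(zip(ids1[mutual], nn12[mutual]))

# Synthetic 64-D descriptors (XFeat's real descriptors are learned);
# desc2 is a permuted, slightly perturbed subset of desc1.
rng = np.random.default_rng(0)
desc1 = rng.standard_normal((5, 64)).astype(np.float32)
desc2 = desc1[[2, 0, 4]] + 0.01

print(mutual_nearest_neighbors(desc1, desc2))
```

The mutual-consistency check is the key design choice: it discards one-sided matches (here, descriptors 1 and 3 have no counterpart in the second set), which is what makes plain MNN a serviceable lightweight alternative to learned matchers like SuperGlue or LightGlue.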
Problem

Research questions and friction points this paper is trying to address.

visual odometry
challenging environments
illumination changes
LiDAR-inertial-visual odometry
feature representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

hybrid visual odometry
feature descriptor benchmarking
LiDAR-inertial-visual odometry
learning-based descriptors
photometric consistency
Eunseon Choi
Department of Convergence IT Engineering, Pohang University of Science and Technology (POSTECH), Pohang 37673, Republic of Korea
Junwoo Hong
Department of Convergence IT Engineering, Pohang University of Science and Technology (POSTECH), Pohang 37673, Republic of Korea
Daehan Lee
Department of Convergence IT Engineering, Pohang University of Science and Technology (POSTECH), Pohang 37673, Republic of Korea
Sanghyun Park
Professor of Computer Science, Yonsei University
Database, Data mining, Bioinformatics, Artificial intelligence
Hyunyoung Jo
Department of Convergence IT Engineering, Pohang University of Science and Technology (POSTECH), Pohang 37673, Republic of Korea
Sunyoung Kim
School of Mechanical Engineering, Kunsan National University, Jeonbuk 54150, Republic of Korea
Changho Kang
Department of Artificial Intelligence and Robotics, Sejong University, Seoul 05006, Republic of Korea
Seongsam Kim
Disaster Scientific Investigation Division, Disaster Investigation Technology Team, National Disaster Management Research Institute, Ministry of the Interior and Safety, Ulsan 44429, Republic of Korea
Yonghan Jung
Purdue University
Causal Inference, Semiparametric Inference, Explainable AI, Reinforcement Learning
Jungwook Park
Disaster Scientific Investigation Division, Disaster Investigation Technology Team, National Disaster Management Research Institute, Ministry of the Interior and Safety, Ulsan 44429, Republic of Korea
Seul Koo
Disaster Scientific Investigation Division, Disaster Investigation Technology Team, National Disaster Management Research Institute, Ministry of the Interior and Safety, Ulsan 44429, Republic of Korea
Soohee Han
Professor of Electrical Engineering and Convergence IT Engineering, POSTECH
Reinforcement learning, Mathematical Instrumentation, Battery informatics