LVI-Q: Robust LiDAR-Visual-Inertial-Kinematic Odometry for Quadruped Robots Using Tightly-Coupled and Efficient Alternating Optimization

📅 2025-10-16
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address pose-estimation drift in SLAM for quadruped robots operating in complex, dynamic environments, this paper proposes a tightly coupled LiDAR-visual-inertial-kinematic multi-sensor odometry system. Methodologically, it establishes a dual-branch alternating framework, comprising an optimization-based visual-inertial-kinematic (VIKO) branch and a filter-based LiDAR-inertial-kinematic (LIKO) branch, and introduces three key components: (i) foot preintegration to incorporate joint-encoder kinematics, (ii) superpixel-based LiDAR-visual depth-consistency constraints that keep the sliding-window visual optimization robust to motion blur and textureless regions, and (iii) point-to-plane residuals in an error-state iterative Kalman filter (ESIKF) for efficient LiDAR updates. Experiments on multiple public and in-house long-term datasets show that the system substantially suppresses trajectory drift, achieving higher localization accuracy and environmental adaptability than state-of-the-art fusion-based SLAM approaches.
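The point-to-plane residual named in (iii) is the standard LiDAR measurement model: each scan point is transformed by the current pose estimate and scored by its signed distance to a plane fitted in the local map. A minimal sketch, with illustrative function names (not taken from the authors' code):

```python
def rot_apply(R, p):
    """Apply a 3x3 rotation matrix (row-major nested lists) to a 3-vector."""
    return [sum(R[i][j] * p[j] for j in range(3)) for i in range(3)]

def point_to_plane_residual(R, t, point, plane_point, plane_normal):
    """Signed distance from a transformed LiDAR point to a local map plane.

    R, t        -- current body-to-world rotation and translation estimate
    point       -- raw LiDAR point in the sensor/body frame
    plane_point, plane_normal -- plane fitted to the point's map neighbors

    Residual: n . (R p + t - q). An ESIKF-style update would stack these
    residuals and iterate on the error state (not shown here).
    """
    pw = [a + b for a, b in zip(rot_apply(R, point), t)]     # world-frame point
    diff = [pw[i] - plane_point[i] for i in range(3)]
    return sum(plane_normal[i] * diff[i] for i in range(3))
```

Driving this residual toward zero for many point-plane pairs is what aligns the scan to the map; the iterated filter simply re-linearizes it around each intermediate state estimate.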

šŸ“ Abstract
Autonomous navigation for legged robots in complex and dynamic environments relies on robust simultaneous localization and mapping (SLAM) systems to accurately map surroundings and localize the robot, ensuring safe and efficient operation. While prior sensor fusion-based SLAM approaches have integrated various sensor modalities to improve their robustness, these algorithms are still susceptible to estimation drift in challenging environments due to their reliance on unsuitable fusion strategies. Therefore, we propose a robust LiDAR-visual-inertial-kinematic odometry system that integrates information from multiple sensors, such as a camera, LiDAR, inertial measurement unit (IMU), and joint encoders, for visual and LiDAR-based odometry estimation. Our system employs a fusion-based pose estimation approach that runs optimization-based visual-inertial-kinematic odometry (VIKO) and filter-based LiDAR-inertial-kinematic odometry (LIKO) based on measurement availability. In VIKO, we utilize the foot preintegration technique and robust LiDAR-visual depth consistency using superpixel clusters in a sliding window optimization. In LIKO, we incorporate foot kinematics and employ a point-to-plane residual in an error-state iterative Kalman filter (ESIKF). Compared with other sensor fusion-based SLAM algorithms, our approach shows robust performance across public and long-term datasets.
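The "foot kinematics" the abstract incorporates rests on the usual zero-velocity stance-foot constraint: if a foot in contact is stationary in the world, the body velocity follows from the gyro rate and the joint encoders. A minimal sketch under that assumption (names and frames are illustrative, not the authors' formulation):

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def body_velocity_from_stance_foot(omega, p_foot, v_foot_rel):
    """Body-frame velocity implied by one stationary stance foot.

    omega      -- body angular rate from the gyro (rad/s)
    p_foot     -- foot position in the body frame, from forward kinematics
    v_foot_rel -- foot velocity relative to the body, J(q) @ qdot from encoders

    With the foot fixed in the world: 0 = v_body + omega x p_foot + v_foot_rel,
    hence v_body = -(omega x p_foot + v_foot_rel).
    """
    w_x_p = cross(omega, p_foot)
    return [-(w_x_p[i] + v_foot_rel[i]) for i in range(3)]
```

Preintegrating such leg-odometry increments between camera or LiDAR frames, analogously to IMU preintegration, is what lets the kinematic measurements enter the sliding-window optimization as relative-motion factors.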
Problem

Research questions and friction points this paper is trying to address.

Robust SLAM for legged robots in complex dynamic environments
Overcoming estimation drift with multi-sensor fusion strategies
Integrating LiDAR, visual, inertial, and kinematic data for odometry
Innovation

Methods, ideas, or system contributions that make the work stand out.

Tightly-coupled LiDAR-visual-inertial-kinematic odometry system
Alternating optimization-based visual-inertial-kinematic odometry
Filter-based LiDAR-inertial-kinematic odometry with ESIKF
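The alternating structure in the bullets above can be sketched as a dispatcher that routes each arriving measurement to the optimization-based VIKO branch or the filter-based LIKO branch. The class and method names here are assumptions for illustration, not the authors' API:

```python
class AlternatingOdometry:
    """Toy dispatcher for a dual-branch VIKO/LIKO system (illustrative only)."""

    def __init__(self, viko, liko):
        self.viko = viko   # sliding-window visual-inertial-kinematic optimizer
        self.liko = liko   # ESIKF-based LiDAR-inertial-kinematic filter
        self.pose = None

    def on_measurement(self, kind, data):
        # Route by measurement availability: LiDAR scans drive the filter
        # branch, camera frames drive the window optimization. IMU and
        # joint-encoder samples would be propagated/preintegrated by both
        # branches between frames (not shown here).
        if kind == "lidar":
            self.pose = self.liko.update(data)       # point-to-plane ESIKF update
        elif kind == "image":
            self.pose = self.viko.optimize(data)     # sliding-window optimization
        return self.pose
```

The appeal of this split is that each branch runs only when its sensor actually reports, so a dropout in one modality degrades rather than stalls the shared state estimate.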