🤖 AI Summary
To address pose estimation drift in SLAM for quadrupedal robots operating in complex dynamic environments, this paper proposes a tightly coupled LiDAR-visual-inertial-kinematic multi-sensor odometry system. Methodologically, it establishes a dual-branch alternating optimization framework, comprising visual-inertial-kinematic and LiDAR-inertial-kinematic sub-systems, and introduces three key innovations: (i) foot-mounted preintegration to model joint kinematics, (ii) superpixel-based depth consistency constraints to enhance visual robustness against motion blur and textureless regions, and (iii) joint optimization via point-to-plane residuals integrated within an error-state iterated Kalman filter (ESIKF) for efficient sliding-window estimation. Extensive experiments on multiple public and in-house long-duration datasets demonstrate that the proposed system significantly suppresses trajectory drift, achieving superior localization accuracy and environmental adaptability compared to state-of-the-art fused SLAM approaches.
📝 Abstract
Autonomous navigation for legged robots in complex and dynamic environments relies on robust simultaneous localization and mapping (SLAM) systems to accurately map surroundings and localize the robot, ensuring safe and efficient operation. While prior sensor fusion-based SLAM approaches have integrated various sensor modalities to improve their robustness, these algorithms remain susceptible to estimation drift in challenging environments because they rely on unsuitable fusion strategies. Therefore, we propose a robust LiDAR-visual-inertial-kinematic odometry system that integrates information from multiple sensors, namely a camera, a LiDAR, an inertial measurement unit (IMU), and joint encoders, for visual and LiDAR-based odometry estimation. Our system employs a fusion-based pose estimation approach that runs optimization-based visual-inertial-kinematic odometry (VIKO) and filter-based LiDAR-inertial-kinematic odometry (LIKO) depending on measurement availability. In VIKO, we utilize a foot preintegration technique and robust LiDAR-visual depth consistency using superpixel clusters in a sliding-window optimization. In LIKO, we incorporate foot kinematics and employ a point-to-plane residual in an error-state iterated Kalman filter (ESIKF). Compared with other sensor fusion-based SLAM algorithms, our approach shows robust performance across public and long-term datasets.
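The point-to-plane residual mentioned for the LIKO branch is a standard LiDAR registration term: each scan point, transformed by the current pose estimate, is penalized by its signed distance to a local plane fitted to nearby map points. A minimal sketch of this residual (the function name and toy values are illustrative, not from the paper):

```python
import numpy as np

def point_to_plane_residual(R, t, p, q, n):
    """Signed distance of LiDAR point p (sensor frame), transformed by the
    current pose estimate (R, t), to the local plane through map point q
    with unit normal n: r = n^T (R p + t - q)."""
    return float(n @ (R @ p + t - q))

# Toy example: identity pose, plane z = 2 (normal along z), point at z = 3
R = np.eye(3)
t = np.zeros(3)
p = np.array([1.0, 2.0, 3.0])
q = np.array([1.0, 2.0, 2.0])
n = np.array([0.0, 0.0, 1.0])
r = point_to_plane_residual(R, t, p, q, n)  # 1.0: point sits 1 m above the plane
```

In an ESIKF such as the one described here, residuals of this form (stacked over many point-plane matches) drive the iterated measurement update that corrects the IMU/kinematics-propagated state.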