🤖 AI Summary
To address the pervasive z-axis position drift in state estimation for quadrupedal robots, this paper proposes two frameworks, E-InEKF and E-IS, built upon the Invariant Extended Kalman Filter (InEKF) and Invariant Smoother (IS). LiDAR odometry and GPS measurements are modeled as group-affine observations and integrated into the invariant filtering/smoothing framework. A parallel ICP thread enables efficient exteroceptive fusion without sacrificing the efficiency of proprioceptive state estimation. By jointly fusing IMU, kinematic, LiDAR, and GPS data on the KAIST HOUND2 platform, the method reduces absolute trajectory error (ATE) by up to 40% outdoors and 28% indoors compared to LIO-SAM and FAST-LIO2, further improves relative position error (RPE), and maintains real-time performance.
📝 Abstract
This letter introduces two multi-sensor state estimation frameworks for quadruped robots, built on the Invariant Extended Kalman Filter (InEKF) and Invariant Smoother (IS). The proposed methods, named E-InEKF and E-IS, fuse kinematics, IMU, LiDAR, and GPS data to mitigate position drift, particularly along the z-axis, a common issue in proprioceptive-based approaches. We derive observation models that satisfy group-affine properties to integrate LiDAR odometry and GPS into InEKF and IS. LiDAR odometry is incorporated using Iterative Closest Point (ICP) registration on a parallel thread, preserving the computational efficiency of proprioceptive-based state estimation. We evaluate E-InEKF and E-IS with and without exteroceptive sensors, benchmarking them against LiDAR-based odometry methods in indoor and outdoor experiments using the KAIST HOUND2 robot. Our methods achieve lower Relative Position Error (RPE) and significantly reduce Absolute Trajectory Error (ATE), with improvements of up to 28% indoors and 40% outdoors compared to LIO-SAM and FAST-LIO2. Additionally, we compare E-InEKF and E-IS in terms of computational efficiency and accuracy.
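The parallel-thread ICP design mentioned above can be sketched in miniature: a worker thread registers consecutive scans and publishes relative-pose measurements through a queue, so the main (proprioceptive) estimation loop is never blocked by registration. This is only a toy illustration, not the authors' implementation: `icp_rigid` assumes known point correspondences and uses the SVD (Kabsch) solution, whereas real LiDAR odometry would add nearest-neighbor association and outlier rejection.

```python
import threading
import queue
import numpy as np

def icp_rigid(src, tgt, iters=20):
    """Toy point-to-point ICP with known correspondences: estimate (R, t)
    such that R @ src_i + t ≈ tgt_i, via the SVD (Kabsch) solution."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = src @ R.T + t
        mu_s, mu_t = moved.mean(axis=0), tgt.mean(axis=0)
        # Cross-covariance of the centered point sets.
        H = (moved - mu_s).T @ (tgt - mu_t)
        U, _, Vt = np.linalg.svd(H)
        # Reflection guard keeps the result a proper rotation (det = +1).
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ D @ U.T
        # Compose the incremental correction with the running estimate.
        R, t = dR @ R, dR @ t + (mu_t - dR @ mu_s)
    return R, t

def icp_worker(scans, out_q):
    """Parallel thread: register each scan to its predecessor and publish
    the relative transform as an odometry measurement for the filter."""
    for prev, curr in zip(scans, scans[1:]):
        out_q.put(icp_rigid(curr, prev))
```

A consumer (e.g. the invariant filter's update step) would simply poll `out_q` for relative-pose measurements, decoupling registration latency from the high-rate IMU/kinematics propagation.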