🤖 AI Summary
To address the limited robustness of LiDAR-inertial odometry and its dependence on sensor-specific modeling across diverse sensor configurations and operational scenarios, this paper proposes a generic fusion framework that requires no prior sensor modeling. Methodologically, it integrates inertial measurements with a simplified IMU motion model (avoiding IMU preintegration) and registers LiDAR scans directly against a local map (avoiding feature extraction), adding a novel regularization term to the registration to improve convergence stability. The key contributions are: (1) a single, unified configuration enabling cross-platform deployment (e.g., urban driving, natural environments) and cross-sensor compatibility (various LiDAR and IMU models); (2) experimental validation on multiple real-world robotic platforms demonstrating high accuracy, strong robustness, and real-time performance; and (3) an open-source implementation.
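To make the "simplified IMU motion model" idea concrete, here is a minimal sketch of one Euler integration step of position, velocity, and orientation from raw gyroscope and accelerometer readings. This is an illustrative assumption about what such a model could look like (constant angular velocity and constant acceleration over one step), not the paper's actual formulation; the function name and signature are hypothetical.

```python
import numpy as np

def integrate_imu(p, v, R, gyro, accel, dt, g=np.array([0.0, 0.0, -9.81])):
    """One Euler step of a simple IMU motion model (illustrative sketch).

    p, v: position and velocity in the world frame; R: body-to-world rotation.
    gyro, accel: raw angular rate (rad/s) and specific force (m/s^2) in the
    body frame; dt: time step; g: gravity in the world frame.
    """
    # Rotation update via Rodrigues' formula on the gyro increment.
    theta = gyro * dt
    angle = np.linalg.norm(theta)
    if angle > 1e-12:
        a = theta / angle
        K = np.array([[0.0, -a[2], a[1]],
                      [a[2], 0.0, -a[0]],
                      [-a[1], a[0], 0.0]])
        dR = np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)
    else:
        dR = np.eye(3)
    # The accelerometer measures specific force in the body frame; rotate it
    # into the world frame, add gravity, then integrate velocity and position.
    a_world = R @ accel + g
    p_new = p + v * dt + 0.5 * a_world * dt**2
    v_new = v + a_world * dt
    R_new = R @ dR
    return p_new, v_new, R_new
```

For a stationary IMU the accelerometer reads the reaction to gravity (`[0, 0, 9.81]` in this convention), so `a_world` cancels to zero and the pose stays put, which is a quick sanity check for the sign conventions.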
📝 Abstract
Accurate odometry is a critical component in a robotic navigation stack, and subsequent modules such as planning and control often rely on an estimate of the robot's motion. Sensor-based odometry approaches should be robust across sensor types and deployable in different target domains, from solid-state LiDARs mounted on cars in urban-driving scenarios to spinning LiDARs on handheld packages used in unstructured natural environments. In this paper, we propose a robust LiDAR-inertial odometry system that does not rely on sensor-specific modeling. Sensor fusion techniques for LiDAR and inertial measurement unit (IMU) data typically integrate IMU data iteratively in a Kalman filter or use pre-integration in a factor graph framework, combined with LiDAR scan matching often exploiting some form of feature extraction. We propose an alternative strategy that only requires a simplified motion model for IMU integration and directly registers LiDAR scans in a scan-to-map approach. Our approach allows us to impose a novel regularization on the LiDAR registration, improving the overall odometry performance. We detail extensive experiments on a number of datasets covering a wide array of commonly used robotic sensors and platforms. We show that our approach works with the exact same configuration in all these scenarios, demonstrating its robustness. We have open-sourced our implementation so that the community can build further on our work and use it in their navigation stacks.
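The abstract's direct scan-to-map registration with a regularization term can be sketched as one damped Gauss-Newton step of a point-to-plane alignment. Note the hedge: the paper does not specify its regularizer here, so this sketch assumes a simple Tikhonov penalty on the 6-DoF pose increment, and the function name, correspondence handling, and damping weight `lam` are all illustrative assumptions.

```python
import numpy as np

def regularized_point_to_plane_step(src, tgt, normals, lam=1.0):
    """One Gauss-Newton step of point-to-plane registration with Tikhonov
    regularization on the pose increment (illustrative sketch).

    src, tgt: (N, 3) corresponding scan and map points; normals: (N, 3) map
    surface normals; lam: damping weight pulling the increment toward zero
    (i.e., toward the motion-model prediction the scan was deskewed with).
    Returns delta = (omega, t): a small rotation and translation increment.
    """
    n_pts = src.shape[0]
    J = np.zeros((n_pts, 6))
    r = np.zeros(n_pts)
    for i in range(n_pts):
        n = normals[i]
        # Point-to-plane residual and its Jacobian w.r.t. (omega, t),
        # linearized as p' ~ p + omega x p + t.
        r[i] = n @ (tgt[i] - src[i])
        J[i, :3] = np.cross(src[i], n)   # rotational part
        J[i, 3:] = n                     # translational part
    H = J.T @ J + lam * np.eye(6)        # regularized normal equations
    delta = np.linalg.solve(H, J.T @ r)
    return delta
```

The damping term keeps the linear system well conditioned in degenerate geometry (e.g., a long corridor where some pose directions are unobservable), which is one plausible reading of how a regularizer improves convergence stability.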