🤖 AI Summary
To address the limited robustness and flexibility of LiDAR SLAM under complex environments, high-speed motion, and multi-platform deployment, this paper introduces the first modular, no-code 3D LiDAR SLAM framework. Methodologically, it (1) adopts a timestamped pose graph as the unified map representation, enabling the posterior generation of metric maps tailored to particular tasks; (2) tightly integrates linear and angular velocity estimation into the ICP optimizer, achieving robust mapping under aggressive motion without IMU assistance; and (3) builds pipelines as computation graphs of reusable modules, supporting zero-code configuration and adaptive parameter tuning. The framework is compatible with 16- to 128-line LiDARs in both 2D and 3D configurations and with diverse platforms, including vehicular, handheld, aerial, and quadrupedal systems. Evaluated on 83 sequences (over 250 km) with no manual parameter tuning, it matches or surpasses state-of-the-art methods and successfully closes loops on several highly challenging sequences that cause mainstream systems to diverge. The code is open-sourced and deployed on real-world robotic systems.
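The "view-based map" idea above can be made concrete with a minimal sketch: keyframes store only a timestamp, a pose, and the raw sensor reading, and any metric map (here, a simple global point cloud) is generated a posteriori by re-projecting the raw scans. This is an illustrative 2D toy, not the MOLA API; all class and function names are hypothetical.

```python
# Hypothetical sketch (NOT the MOLA API): a view-based map is a pose graph
# whose nodes keep timestamped raw sensor readings; metric maps are
# generated afterwards, on demand, for a particular task.
import math
from dataclasses import dataclass, field


@dataclass
class KeyFrame:
    stamp: float   # sensor timestamp [s]
    pose: tuple    # (x, y, yaw) -- 2D for illustration
    scan: list     # raw points [(px, py), ...] in the sensor frame


@dataclass
class ViewBasedMap:
    keyframes: list = field(default_factory=list)

    def add(self, kf: KeyFrame) -> None:
        self.keyframes.append(kf)

    def render_point_map(self) -> list:
        """Posterior generation of one possible metric map (a global point
        cloud): re-project every raw scan through its optimized pose."""
        out = []
        for kf in self.keyframes:
            x0, y0, yaw = kf.pose
            c, s = math.cos(yaw), math.sin(yaw)
            for px, py in kf.scan:
                out.append((x0 + c * px - s * py, y0 + s * px + c * py))
        return out
```

Because the raw readings are kept, the same map object could instead feed a voxel map, an NDT-like map, or an obstacle grid, simply by swapping the rendering step; updated poses (e.g., after a loop closure) only require re-rendering, not re-mapping.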
📝 Abstract
LiDAR-based SLAM is a core technology for autonomous vehicles and robots. One key contribution of this work to 3D LiDAR SLAM and localization is a fierce defense of view-based maps (pose graphs with time-stamped sensor readings) as the fundamental map representation. As will be shown, they allow the greatest flexibility, enabling the posterior generation of arbitrary metric maps optimized for particular tasks, e.g., obstacle avoidance or real-time localization. Moreover, this work introduces a new framework in which mapping pipelines can be defined without coding, by specifying the connections of a network of reusable blocks, much as deep-learning networks are designed by connecting layers of standardized elements. We also introduce tightly-coupled estimation of the linear and angular velocity vectors within the Iterative Closest Point (ICP)-like optimizer, leading to superior robustness against aggressive motion profiles without the need for an IMU. Extensive experimental validation shows that the proposal compares well with, or improves upon, former state-of-the-art (SOTA) LiDAR odometry systems, while also successfully mapping some hard sequences where others diverge. A proposed self-adaptive configuration has been used, without parameter changes, for all 3D LiDAR datasets with sensors between 16 and 128 rings, and has been extensively tested on 83 sequences over more than 250 km of automotive, hand-held, airborne, and quadruped LiDAR datasets, both indoors and outdoors. The system's flexibility is demonstrated with additional configurations for 2D LiDARs and for building 3D NDT-like maps. The framework is open-sourced online: https://github.com/MOLAorg/mola
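One intuition behind estimating velocities inside the ICP-like optimizer is scan de-skewing: if the linear velocity v and angular velocity w over a scan are known, every point can be compensated for the motion the sensor underwent between that point's timestamp and the scan reference time. The sketch below shows this constant-velocity compensation in 2D; it is a simplified illustration under assumed names, not the paper's actual formulation.

```python
# Illustrative sketch only (hypothetical names, not the paper's API):
# constant-velocity de-skewing of a single LiDAR point, 2D case.
import math


def deskew_point_2d(p, dt, v, w):
    """Compensate a 2D point p = (x, y) captured dt seconds after the scan
    reference time, assuming constant linear velocity v = (vx, vy) and
    constant angular velocity w [rad/s] throughout the scan."""
    theta = w * dt                   # rotation accumulated over dt
    tx, ty = v[0] * dt, v[1] * dt    # translation accumulated over dt
    c, s = math.cos(theta), math.sin(theta)
    # Express the point in the scan's reference frame:
    return (tx + c * p[0] - s * p[1], ty + s * p[0] + c * p[1])
```

In a tightly-coupled scheme, (v, w) are not taken from an IMU but are extra unknowns of the ICP optimization itself, so de-skewing and alignment improve each other iteratively; this is what yields robustness to aggressive motion without inertial aiding.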