KISS-IMU: Self-supervised Inertial Odometry with Motion-balanced Learning and Uncertainty-aware Inference

📅 2026-03-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a self-supervised inertial odometry framework that eliminates the reliance on ground-truth pose labels, thereby enhancing generalization in unknown environments. Only the IMU network is trained: lightweight LiDAR point cloud registration via ICP, refined by pose graph optimization, generates the pseudo-supervisory signals. To improve robustness and accuracy, the method incorporates a motion-aware balanced learning strategy and an uncertainty-driven adaptive inference mechanism. Extensive experiments across diverse real-world platforms, including quadrupedal robots, demonstrate the effectiveness of the approach. Notably, the system achieves strong generalization in inertial odometry without requiring ground-truth annotations or joint multimodal learning, offering a practical solution for deployment in unstructured or GPS-denied settings.
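The pseudo-supervision idea above can be sketched in a few lines: consecutive scan poses from ICP registration and pose graph optimization yield relative-motion targets, and the IMU network is penalized for deviating from them. This is a minimal illustration under assumed conventions (4x4 homogeneous pose matrices, a plain MSE on translation); the function names and loss form are not taken from the paper.

```python
import numpy as np

def relative_pose(T_i, T_j):
    """Relative transform between two LiDAR scan poses (e.g. obtained
    from ICP registration plus pose graph optimization):
    T_ij = inv(T_i) @ T_j."""
    return np.linalg.inv(T_i) @ T_j

def pseudo_label_loss(pred_translation, T_i, T_j):
    """Self-supervised loss sketch: compare the IMU network's predicted
    displacement against the LiDAR-derived relative translation.
    Illustrative MSE only; the paper's actual objective may differ."""
    target = relative_pose(T_i, T_j)[:3, 3]
    return float(np.mean((pred_translation - target) ** 2))
```

No gradients or network are shown; in practice the predicted translation would come from the learned IMU model and this loss would be backpropagated through it.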

📝 Abstract
Inertial measurement units (IMUs), which provide high-frequency linear acceleration and angular velocity measurements, serve as fundamental sensing modalities in robotic systems. Recent advances in deep neural networks have led to remarkable progress in inertial odometry. However, the heavy reliance on ground truth data during training fundamentally limits scalability and generalization to unseen and diverse environments. We propose KISS-IMU, a novel self-supervised inertial odometry framework that eliminates ground truth dependency by leveraging simple LiDAR-based ICP registration and pose graph optimization as a supervisory signal. Our approach embodies two key principles: keeping the IMU stable through motion-aware balanced training and keeping the IMU strong through uncertainty-driven adaptive weighting during inference. To evaluate performance across diverse motion patterns and scenarios, we conducted comprehensive experiments on various real-world platforms, including quadruped robots. Importantly, we train only the IMU network in a self-supervised manner, with LiDAR serving solely as a lightweight supervisory signal rather than requiring additional learnable processes. This design enables the framework to ensure robustness without relying on joint multi-modal learning or ground truth supervision. The supplementary materials are available at https://sparolab.github.io/research/kiss_imu.
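The abstract's "uncertainty-driven adaptive weighting during inference" suggests combining estimates in proportion to their confidence. A standard way to do this is inverse-variance (precision-weighted) fusion, sketched below; this is a generic illustration of the principle, not the paper's actual weighting scheme, and the two-source setup is an assumption.

```python
import numpy as np

def fuse_by_uncertainty(pred_a, var_a, pred_b, var_b):
    """Inverse-variance fusion of two scalar (or elementwise) estimates:
    each prediction is weighted by its precision 1/variance, so the more
    certain source dominates. Returns the fused estimate and its variance."""
    w_a = 1.0 / np.asarray(var_a)
    w_b = 1.0 / np.asarray(var_b)
    fused = (w_a * pred_a + w_b * pred_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var
```

With equal variances this reduces to a plain average; as one source's predicted uncertainty grows, its influence on the fused output shrinks smoothly.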
Problem

Research questions and friction points this paper is trying to address.

inertial odometry
self-supervised learning
ground truth dependency
generalization
scalability
Innovation

Methods, ideas, or system contributions that make the work stand out.

self-supervised learning
inertial odometry
motion-balanced training
uncertainty-aware inference
LiDAR supervision
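The "motion-balanced training" contribution listed above can be illustrated as a resampling scheme that draws equally from each motion regime so that dominant motions (e.g. long stretches of steady walking) do not swamp rarer ones. The regime labels and bucketing below are hypothetical; the paper's actual balancing criterion is not reproduced here.

```python
import random
from collections import defaultdict

def motion_balanced_batch(samples, motion_labels, batch_size, seed=0):
    """Illustrative motion-aware balanced sampling: group training
    windows by motion regime, then draw the same number from each
    bucket to form a batch. Labels like 'static'/'walk'/'run' are
    assumptions for the sketch."""
    rng = random.Random(seed)
    buckets = defaultdict(list)
    for sample, label in zip(samples, motion_labels):
        buckets[label].append(sample)
    per_class = max(1, batch_size // len(buckets))
    batch = []
    for items in buckets.values():
        batch.extend(rng.choices(items, k=per_class))
    return batch
```

Even when one regime contributes most of the raw data, each batch contains an equal share per regime, which is the balancing effect the keyword refers to.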