AI Summary
To address low accuracy, severe accumulated drift, and poor sim-to-real transfer in legged-robot proprioceptive odometry under GPS-denied and visually degraded conditions, this paper proposes an end-to-end, autoregressive state estimation method. The approach abandons conventional filtering frameworks and instead fuses temporal IMU and joint-sensor measurements in a deep learning-driven nonlinear dynamics model. We introduce a novel two-stage autoregressive training paradigm: the model first learns high-dynamic motion modeling in simulation, then achieves efficient domain adaptation with minimal real-world data. Evaluated on the Booster T1 humanoid robot, our method reduces absolute trajectory error by 57.2%, Umeyama-aligned error by 59.2%, and relative pose error by 36.2% compared to the Legolas baseline, demonstrating substantial gains in robustness and generalization across challenging environments.
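The autoregressive element above means the model consumes its own previous pose estimate rather than a ground-truth pose during training rollouts. A minimal sketch of such a rollout loop, with teacher forcing as the stage-1-style alternative (the names `rollout` and `step_fn` are illustrative, not the paper's API):

```python
import numpy as np

def rollout(step_fn, imu_seq, pose0, gt_poses=None, autoregressive=True):
    """Unroll a one-step odometry model over a sensor sequence.

    step_fn(imu_t, prev_pose) -> predicted pose at step t.
    With autoregressive=False (teacher forcing), the previous
    *ground-truth* pose is fed back at each step; with
    autoregressive=True, the model consumes its own previous
    prediction, so training is exposed to the same error
    accumulation it will face at deployment.
    """
    preds, prev = [], pose0
    for t, imu_t in enumerate(imu_seq):
        pred = step_fn(imu_t, prev)
        preds.append(pred)
        if autoregressive or gt_poses is None:
            prev = pred          # feed back own prediction
        else:
            prev = gt_poses[t]   # feed back ground truth
    return np.stack(preds)
```

With a toy integrator `step_fn = lambda imu, prev: prev + imu`, the autoregressive rollout accumulates displacements, while teacher forcing resets the state to ground truth after every step.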
Abstract
Accurate proprioceptive odometry is fundamental for legged robot navigation in GPS-denied and visually degraded environments where conventional visual odometry systems fail. Current approaches face critical limitations: analytical filtering methods suffer from modeling uncertainties and cumulative drift; hybrid learning-filtering approaches remain constrained by their analytical components; and pure learning-based methods struggle with simulation-to-reality transfer while demanding extensive real-world data collection. This paper introduces AutoOdom, a novel autoregressive proprioceptive odometry system that overcomes these challenges through an innovative two-stage training paradigm. Stage 1 employs large-scale simulation data to learn the complex nonlinear dynamics and rapidly changing contact states inherent in legged locomotion, while Stage 2 introduces an autoregressive enhancement mechanism using limited real-world data to effectively bridge the sim-to-real gap. The key innovation lies in our autoregressive training approach, in which the model learns from its own predictions to develop resilience against sensor noise and improve robustness in highly dynamic environments. Comprehensive experimental validation on the Booster T1 humanoid robot demonstrates that AutoOdom significantly outperforms state-of-the-art methods across all evaluation metrics, achieving a 57.2% improvement in absolute trajectory error, a 59.2% improvement in Umeyama-aligned error, and a 36.2% improvement in relative pose error compared to the Legolas baseline. Extensive ablation studies provide critical insights into sensor modality selection and temporal modeling, revealing counterintuitive findings about IMU acceleration data and validating our systematic design choices for robust proprioceptive odometry in challenging locomotion scenarios.
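The "Umeyama-aligned error" metric refers to computing trajectory error after a least-squares similarity alignment (Umeyama, 1991) between the estimated and ground-truth trajectories, which removes any global rotation, translation, and scale offset before measuring residual drift. A minimal NumPy sketch of that standard alignment (not the authors' evaluation code; `umeyama_align` is an illustrative name):

```python
import numpy as np

def umeyama_align(est, gt):
    """Align estimated trajectory `est` to ground truth `gt`
    (both (N, 3) position arrays) with the closed-form Umeyama
    similarity transform, then return the aligned trajectory
    and its RMSE absolute trajectory error."""
    mu_e, mu_g = est.mean(0), gt.mean(0)
    E, G = est - mu_e, gt - mu_g                # centered point sets
    cov = G.T @ E / len(est)                    # cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[-1, -1] = -1.0                        # enforce a proper rotation
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / E.var(0).sum()   # optimal scale
    t = mu_g - s * R @ mu_e
    aligned = (s * (R @ est.T)).T + t
    ate = np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1)))
    return aligned, ate
```

Because the alignment absorbs any global similarity offset, the remaining error reflects trajectory shape distortion and accumulated drift rather than a fixable frame mismatch.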