ADUGS-VINS: Generalized Visual-Inertial Odometry for Robust Navigation in Highly Dynamic and Complex Environments

📅 2024-11-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the degradation of visual-inertial odometry (VIO) accuracy in dynamic, complex environments—caused by moving objects and frequent occlusions—this paper proposes a robust VIO framework. Our method integrates promptable foundation models (e.g., SAM) into VIO for the first time, coupled with an enhanced SORT tracker to achieve precise association and motion decoupling of dynamic objects. We further design a dynamic feature suppression module and a motion consistency verification module, enabling generalized perception of unknown-category and partially occluded dynamic objects, as well as robust pose estimation, all within a tightly coupled VINS-Mono architecture. Evaluated on multiple public benchmarks and real-world highly dynamic scenes—including dense pedestrian-vehicle interactions—our approach reduces average position error by 32% while maintaining centimeter-level localization stability, significantly outperforming existing state-of-the-art methods.
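The summary names a motion consistency verification module but the page gives no details. A common realization of such a check, sketched below purely as an assumption, tests each feature match against the epipolar constraint implied by an IMU-derived ego-motion prior: matches far from their predicted epipolar line are flagged as dynamic. The function name and threshold are illustrative, not from the paper.

```python
import numpy as np

def epipolar_consistency(pts1, pts2, E, threshold=1e-2):
    """Flag feature matches inconsistent with camera ego-motion.

    pts1, pts2: (N, 2) normalized image coordinates in two frames.
    E: 3x3 essential matrix predicted from the IMU ego-motion prior.
    Returns a boolean array: True where the match satisfies the
    epipolar constraint (likely static), False otherwise (likely dynamic).
    """
    # Homogeneous coordinates for both point sets.
    x1 = np.hstack([pts1, np.ones((len(pts1), 1))])
    x2 = np.hstack([pts2, np.ones((len(pts2), 1))])
    lines = x1 @ E.T                          # epipolar lines l = E @ x1
    num = np.abs(np.sum(x2 * lines, axis=1))  # |x2^T E x1|
    den = np.sqrt(lines[:, 0] ** 2 + lines[:, 1] ** 2)
    return num / den < threshold              # point-to-line distance test
```

For a pure x-translation, the essential matrix reduces to `E = [[0,0,0],[0,0,-1],[0,1,0]]`, so static points keep the same normalized y coordinate while dynamic points violate it.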

📝 Abstract
Visual-inertial odometry (VIO) is widely used in various fields, such as robots, drones, and autonomous vehicles. However, real-world scenes often feature dynamic objects, compromising the accuracy of VIO. The diversity and partial occlusion of these objects present a tough challenge for existing dynamic VIO methods. To tackle this challenge, we introduce ADUGS-VINS, which integrates an enhanced SORT algorithm along with a promptable foundation model into VIO, thereby improving pose estimation accuracy in environments with diverse dynamic objects and frequent occlusions. We evaluated our proposed method using multiple public datasets representing various scenes, as well as in a real-world scenario involving diverse dynamic objects. The experimental results demonstrate that our proposed method performs impressively in multiple scenarios, outperforming other state-of-the-art methods. This highlights its remarkable generalization and adaptability in diverse dynamic environments, showcasing its potential to handle various dynamic objects in practical applications.
Problem

Research questions and friction points this paper is trying to address.

Improves VIO accuracy in dynamic environments
Handles diverse and occluded dynamic objects
Enhances pose estimation in complex scenes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Enhanced SORT algorithm integration
Promptable foundation model in VIO
Improved pose estimation in dynamic environments
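The dynamic feature suppression described above presumably discards feature points that fall inside the segmentation masks of tracked dynamic objects (e.g., masks from a promptable segmenter such as SAM). A minimal sketch under that assumption; the function and parameter names are hypothetical, not from the paper:

```python
import numpy as np

def suppress_dynamic_features(features, dynamic_masks):
    """Keep only feature points lying outside every dynamic-object mask.

    features: (N, 2) array of (x, y) pixel coordinates.
    dynamic_masks: list of boolean H x W arrays, one per tracked
                   dynamic object.
    Returns the subset of features on presumed-static image regions.
    """
    keep = np.ones(len(features), dtype=bool)
    for mask in dynamic_masks:
        h, w = mask.shape
        # Clip to image bounds, then drop points covered by this mask.
        xs = features[:, 0].astype(int).clip(0, w - 1)
        ys = features[:, 1].astype(int).clip(0, h - 1)
        keep &= ~mask[ys, xs]
    return features[keep]
```

Only the surviving features would then feed the tightly coupled optimization, so moving objects no longer corrupt the pose estimate.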
Rui Zhou
Electronic Information School, Wuhan University
Jingbin Liu
Finnish Geospatial Research Institute
Geoscience · Positioning · Mobile mapping
Junbin Xie
Electronic Information School, Wuhan University
Jianyu Zhang
Electronic Information School, Wuhan University
Yingze Hu
Electronic Information School, Wuhan University
Jiele Zhao
Electronic Information School, Wuhan University