🤖 AI Summary
To address the long-term drift in multi-robot collaborative localization caused by reliance on single-robot odometry, this paper proposes SaWa-ML, a robust visual-inertial-range fusion localization method built on camera, IMU, and ultra-wideband (UWB) sensing. The approach tackles the problem by (1) leveraging UWB relative distance measurements, whose errors do not accumulate over time, to impose geometric constraints and enable structure-aware pose correction; and (2) introducing an adaptive weighting mechanism grounded in sensor characteristics and visual-inertial odometry (VIO) error models, which dynamically balances each measurement's contribution within a nonlinear optimization framework. Real-world experiments demonstrate that the proposed method significantly outperforms state-of-the-art approaches: long-term localization drift is reduced by 42%, while overall pose accuracy and system stability are substantially improved.
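To make the fusion idea concrete, here is a minimal sketch, not the paper's implementation, of how drift-free UWB range residuals and drift-prone odometry priors can be combined with per-term weights in a single nonlinear least-squares problem. The 2D setting, the toy robot layout, the weight values, and the use of SciPy's generic solver are all illustrative assumptions.

```python
# Minimal sketch (assumptions, not SaWa-ML itself): correct 2D robot
# positions by jointly penalizing deviation from VIO estimates and
# violation of UWB inter-robot range constraints.
import numpy as np
from scipy.optimize import least_squares

def residuals(flat_pos, odom_pos, ranges, w_odom, w_uwb):
    """flat_pos: stacked (N, 2) robot positions being optimized.
    odom_pos:  (N, 2) VIO position estimates (drift-prone prior).
    ranges:    dict {(i, j): d_ij} of UWB inter-robot distances.
    w_odom:    per-robot weights, lower when VIO drift is suspected.
    w_uwb:     per-pair weights, e.g., from UWB signal quality."""
    pos = flat_pos.reshape(-1, 2)
    res = []
    # Prior residuals: stay close to odometry, scaled by its trust.
    for i, p in enumerate(pos):
        res.extend(w_odom[i] * (p - odom_pos[i]))
    # Range residuals: drift-free geometric constraints between robots.
    for (i, j), d in ranges.items():
        res.append(w_uwb[(i, j)] * (np.linalg.norm(pos[i] - pos[j]) - d))
    return np.array(res)

# Toy example: three robots with VIO estimates and noisy UWB ranges.
odom = np.array([[0.0, 0.0], [4.1, 0.2], [2.0, 3.4]])
uwb = {(0, 1): 4.0, (0, 2): 4.0, (1, 2): 4.0}
w_odom = np.array([1.0, 0.3, 0.5])   # e.g., inverse VIO uncertainty
w_uwb = {k: 2.0 for k in uwb}        # e.g., from signal strength
sol = least_squares(residuals, odom.ravel(),
                    args=(odom, uwb, w_odom, w_uwb))
print(sol.x.reshape(-1, 2))          # corrected robot positions
```

Down-weighting a robot's odometry prior (as for robot 1 above) lets the drift-free range constraints pull its estimate back toward a geometrically consistent configuration.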
📝 Abstract
Multi-robot localization is a crucial task for implementing multi-robot systems. Numerous researchers have proposed optimization-based multi-robot localization methods that use camera, IMU, and UWB sensors. Nevertheless, the characteristics of individual robots' odometry estimates and of the inter-robot distance measurements used in the optimization have not been sufficiently considered. In addition, previous studies have been heavily influenced by the accuracy of the odometry estimated by each individual robot. Consequently, long-term drift caused by error accumulation is potentially inevitable. In this paper, we propose a novel visual-inertial-range-based multi-robot localization method, named SaWa-ML, which enables geometric structure-aware pose correction and weight-adaptation-based robust multi-robot localization. Our contributions are twofold: (i) we leverage UWB sensor data, whose ranging error does not accumulate over time, to first estimate the relative positions between robots and then correct the position of each robot, thus reducing long-term drift errors; (ii) we design adaptive weights for robot pose correction by considering the characteristics of the sensor data and the visual-inertial odometry estimates. The proposed method has been validated in real-world experiments, showing a substantial performance increase compared with state-of-the-art algorithms.
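As an illustration of contribution (ii), the sketch below shows one plausible way adaptive weights could be derived from sensor characteristics: the VIO weight decays as accumulated travel (and hence expected drift) grows, while a UWB pair is down-weighted when its recent readings scatter widely, as under non-line-of-sight conditions. The functional forms and constants are assumptions for illustration, not the paper's actual error models.

```python
# Hypothetical adaptive-weight sketch; constants are illustrative only.
import numpy as np

def vio_weight(dist_traveled, sigma0=0.05, drift_rate=0.01):
    """VIO uncertainty grows roughly with distance traveled, so its
    weight (treated here as an inverse variance) decays accordingly."""
    sigma = sigma0 + drift_rate * dist_traveled
    return 1.0 / sigma**2

def uwb_weight(range_samples, sigma_min=0.1):
    """Down-weight UWB pairs whose recent readings scatter widely."""
    sigma = max(float(np.std(range_samples)), sigma_min)
    return 1.0 / sigma**2

w_o = vio_weight(dist_traveled=12.0)        # robot that has moved 12 m
w_r = uwb_weight([4.02, 3.97, 4.40, 3.60])  # noisy range readings
print(w_o, w_r)
```

Weights of this kind would then scale the corresponding residual terms in the nonlinear optimization, so that pose correction leans on whichever information source is currently more reliable.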