Multi-cam Multi-map Visual Inertial Localization: System, Validation and Dataset

📅 2024-12-05
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Real-time robotic control demands causal pose estimation—relying solely on past and current observations—yet visual SLAM violates causality via non-causal loop closure optimization, and visual-inertial odometry (VIO) suffers from unbounded drift. This paper introduces the first causal visual-inertial localization framework: a tightly coupled multi-camera–multi-map architecture that achieves bounded drift through online map selection, cross-map constraint propagation, IMU preintegration, and joint keyframe feature optimization. We establish the first comprehensive evaluation framework for causal localization, including a formal causal error model and a real-time relocalization mechanism. Evaluated on a newly collected long-term campus dataset and public benchmarks, our method guarantees strictly bounded localization error, improves accuracy by 37%, and operates in real time. The system implementation and dataset are publicly released.
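The "formal causal error model" can be illustrated with a toy metric: score each pose against ground truth at the instant it was emitted, so retrospective loop-closure refinements can never improve the number. A minimal sketch (the function name and the drift/clipping toy data are illustrative stand-ins, not the paper's actual formulation):

```python
import numpy as np

def causal_error(causal_estimates, ground_truth):
    """Per-timestep error of poses as they were emitted; later
    refinements are never substituted in (causal evaluation)."""
    errs = np.linalg.norm(np.asarray(causal_estimates) - np.asarray(ground_truth), axis=1)
    return float(errs.mean()), float(errs.max())

# Toy 2-D example: the robot holds still at the origin.
gt = np.zeros((100, 2))
drifting = np.cumsum(np.full((100, 2), 0.01), axis=0)  # VIO-style unbounded drift
bounded = np.clip(drifting, -0.2, 0.2)                 # stand-in for map-constrained output

mean_d, max_d = causal_error(drifting, gt)
mean_b, max_b = causal_error(bounded, gt)
```

Under this metric the drifting estimator's worst-case error grows with path length, while the map-constrained one stays capped, which is the "strictly bounded localization error" claim made above.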

📝 Abstract
Robot control loops require causal pose estimates that depend only on past and present measurements. At each timestep, controllers compute commands using the current pose without waiting for future refinements. While traditional visual SLAM systems achieve high accuracy through retrospective loop closures, these corrections arrive after control decisions were already executed, violating causality. Visual-inertial odometry maintains causality but accumulates unbounded drift over time. To address the distinct requirements of robot control, we propose a multi-camera multi-map visual-inertial localization system providing real-time, causal pose estimation with bounded localization error through continuous map constraints. Since standard trajectory metrics evaluate post-processed trajectories, we analyze the error composition of map-based localization systems and propose a set of evaluation metrics suitable for measuring causal localization performance. To validate our system, we design a multi-camera IMU hardware setup and collect a challenging long-term campus dataset featuring diverse illumination and seasonal conditions. Experimental results on public benchmarks and on our own collected dataset demonstrate that our system provides significantly higher real-time localization accuracy compared to other methods. To benefit the community, we have made both the system and the dataset open source at https://anonymous.4open.science/r/Multi-cam-Multi-map-VILO-7993.
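At a very high level, the causal pipeline the abstract describes amounts to: propagate the pose with IMU increments at every timestep, fuse a map constraint whenever one is available at that same timestep, and commit each estimate immediately rather than revising it later. A minimal 2-D sketch under those assumptions (the `map_fixes` interface and the blending gain are hypothetical simplifications, not the paper's tightly coupled optimization):

```python
import numpy as np

def run_causal_localizer(imu_deltas, map_fixes, gain=0.5):
    """Emit one pose per timestep using only past and present data.

    imu_deltas: per-step motion increments (stand-in for IMU preintegration).
    map_fixes:  dict {t: absolute pose} of map-matching observations
                available *at* time t (hypothetical interface).
    """
    pose = np.zeros(2)
    outputs = []
    for t, delta in enumerate(imu_deltas):
        pose = pose + delta                        # causal IMU propagation
        if t in map_fixes:                         # fuse a same-time map constraint
            pose = pose + gain * (map_fixes[t] - pose)
        outputs.append(pose.copy())                # committed immediately, never revised
    return outputs
```

With periodic map fixes, accumulated IMU bias keeps getting pulled back toward the map, so the error stays bounded instead of growing with path length, while every output remains causal.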
Problem

Research questions and friction points this paper is trying to address.

Robot control loops need causal pose estimates, but SLAM loop closures retroactively revise the trajectory and violate causality
Visual-inertial odometry is causal but accumulates unbounded drift over time
Standard trajectory metrics score post-processed trajectories and do not reflect real-time, causal localization performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Tightly coupled multi-camera multi-map visual-inertial localization system
Real-time causal pose estimation with bounded error via continuous map constraints
Causal error model and evaluation metrics for map-based localization
Multi-camera IMU rig and long-term campus dataset with diverse illumination and seasonal conditions
Yufei Wei
State Key Laboratory of Industrial Control and Technology, Zhejiang University, Hangzhou, P.R. China
Fuzhang Han
State Key Laboratory of Industrial Control and Technology, Zhejiang University, Hangzhou, P.R. China
Yanmei Jiao
Hangzhou Normal University
visual localization
Zhuqing Zhang
State Key Laboratory of Industrial Control and Technology, Zhejiang University, Hangzhou, P.R. China
Yiyuan Pan
Carnegie Mellon University
Robot Learning, Multimodal Learning, Reinforcement Learning
Wenjun Huang
State Key Laboratory of Industrial Control and Technology, Zhejiang University, Hangzhou, P.R. China
Li Tang
State Key Laboratory of Industrial Control and Technology, Zhejiang University, Hangzhou, P.R. China
Huan Yin
Research Assistant Professor, Hong Kong University of Science and Technology
Robotics, Perception, SLAM, Autonomy
Xiaqing Ding
State Key Laboratory of Industrial Control and Technology, Zhejiang University, Hangzhou, P.R. China
Chenxiao Hu
State Key Laboratory of Industrial Control and Technology, Zhejiang University, Hangzhou, P.R. China
Rong Xiong
Zhejiang University
Robotics
Yue Wang
State Key Laboratory of Industrial Control and Technology, Zhejiang University, Hangzhou, P.R. China