Unleashing the Power of Discrete-Time State Representation: Ultrafast Target-based IMU-Camera Spatial-Temporal Calibration

📅 2025-09-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Visual-inertial spatial-temporal calibration requires precise estimation of both spatial and temporal offsets between IMUs and cameras. However, mainstream continuous-time approaches—e.g., B-spline-based formulations—suffer from high computational overhead, hindering large-scale deployment across millions of devices. This paper proposes the first discrete-time state representation method for ultra-high-speed calibration. Leveraging a target-driven optimization framework and joint spatiotemporal estimation, it overcomes the long-standing accuracy bottleneck of discretization in temporal calibration. A B-spline consistency constraint is further introduced to ensure temporal robustness. Experiments demonstrate that the method maintains millimeter-level spatial and millisecond-level temporal accuracy while reducing calibration time by one to two orders of magnitude. The open-source implementation establishes a new paradigm for industrial-scale calibration—efficient, reliable, and fully reproducible.
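The continuous-time baselines mentioned above represent the trajectory as a B-spline over control points, so every measurement residual must evaluate the spline at its timestamp. A minimal sketch of uniform cubic B-spline evaluation (illustrative only, not the paper's implementation; the function and variable names here are our own) gives a feel for the per-residual cost this representation incurs:

```python
import numpy as np

# Uniform cubic B-spline basis matrix (standard matrix form).
M = (1.0 / 6.0) * np.array([
    [ 1,  4,  1, 0],
    [-3,  0,  3, 0],
    [ 3, -6,  3, 0],
    [-1,  3, -3, 1],
], dtype=float)

def spline_eval(ctrl_pts, t, dt):
    """Evaluate a uniform cubic B-spline of 1-D control values at time t.

    Each spline segment of length dt is spanned by four consecutive
    control points; this interpolation runs for every residual in a
    continuous-time formulation, which is part of its computational cost.
    """
    i = int(np.floor(t / dt))           # index of the active segment
    u = t / dt - i                      # normalized time within the segment
    U = np.array([1.0, u, u * u, u ** 3])
    P = np.asarray(ctrl_pts[i:i + 4], dtype=float)
    return U @ M @ P

# Cubic B-splines reproduce linear motion exactly: with control values
# 0, 1, 2, 3 the mid-segment value is 1.5.
print(spline_eval([0.0, 1.0, 2.0, 3.0], 0.5, 1.0))  # -> 1.5
```

A discrete-time representation instead keeps states only at measurement times, which removes this interpolation machinery but, as the summary notes, traditionally weakens temporal (time-offset) calibration.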

📝 Abstract
Visual-inertial fusion is crucial for a wide range of intelligent and autonomous applications, such as robot navigation and augmented reality. To bootstrap and achieve optimal state estimation, the spatial-temporal displacements between the IMU and cameras must be calibrated in advance. Most existing calibration methods adopt a continuous-time state representation, more specifically the B-spline. Although these methods achieve precise spatial-temporal calibration, they suffer from the high computational cost of the continuous-time state representation. To this end, we propose a novel and extremely efficient calibration method that unleashes the power of discrete-time state representation. Moreover, the weakness of discrete-time state representation in temporal calibration is tackled in this paper. With the increasing production of drones, cellphones and other visual-inertial platforms, if one million devices need calibration around the world, saving one minute per device amounts to saving 2083 work days in total. To benefit both the research and industry communities, our code will be open-source.
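The abstract's time-saving figure follows from simple unit conversion, assuming an 8-hour work day (the work-day length is our assumption; the abstract does not state it):

```python
# Back-of-the-envelope check of the abstract's claim: one minute saved
# per device, one million devices, 8-hour work days (assumed).
devices = 1_000_000
minutes_saved = devices * 1          # one minute saved per device
work_days = minutes_saved / 60 / 8   # minutes -> hours -> 8-hour days
print(round(work_days))              # -> 2083
```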
Problem

Research questions and friction points this paper is trying to address.

Calibrating IMU-camera spatial-temporal displacements efficiently
Overcoming high computational cost of continuous-time methods
Addressing discrete-time representation's temporal calibration weakness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Discrete-time state representation for calibration
Tackles temporal calibration weakness efficiently
Open-source code for research and industry
Junlin Song
University of Luxembourg
State Estimation · SLAM · Calibration
Antoine Richard
Nvidia
Robotics · Computer Vision · Control · Machine Learning · Reinforcement Learning
Miguel Olivares-Mendez
Space Robotics (SpaceR) Research Group, Int. Centre for Security, Reliability and Trust (SnT), University of Luxembourg, Luxembourg