🤖 AI Summary
Existing continuous-time motion estimation (CTME) methods struggle to model rotational (SO(3)) and translational states in a unified way, and often lack analytically differentiable, closed-form trajectory representations. To address this, the authors propose the Gaussian Process Trajectory Representation (GPTR) framework. Its core contribution is a closed-form Gaussian process model built on a third-order random jerk prior (jerk being the third derivative of position), enabling unified, smooth, and fully analytic differentiation of both rotational and translational states. GPTR supports joint optimization over heterogeneous sensors, including LiDAR, cameras, IMUs, and UWB, and ships as an open-source, lightweight, header-only C++ library with examples of fully analytic Jacobian computation. Evaluated across multiple CTME benchmarks, GPTR achieves high accuracy and computational efficiency, and its open implementation lowers the barrier to continuous-time trajectory modeling, facilitating downstream applications such as batch optimization, extrinsic calibration, and motion planning.
📝 Abstract
Continuous-time trajectory representation has gained significant popularity in recent years, as it offers an elegant formulation that allows the fusion of a larger number of sensors and sensing modalities, overcoming limitations of traditional discrete-time frameworks. To bolster the adoption of the continuous-time paradigm, we propose the Gaussian Process Trajectory Representation (GPTR) framework for continuous-time motion estimation (CTME) tasks. Our approach stands out by employing a third-order random jerk model, featuring closed-form expressions for both rotational and translational state derivatives. This model provides smooth, continuous trajectory representations that are crucial for precise estimation of complex motion. To support the wider robotics and computer vision communities, we have made the source code for GPTR available as a lightweight header-only library. This format was chosen for its ease of integration, allowing developers to incorporate GPTR into existing systems without needing extensive code modifications. Moreover, we also provide a set of optimization examples with LiDAR, camera, IMU, and UWB factors, along with closed-form analytical Jacobians under the proposed GP framework. Our experiments demonstrate the efficacy and efficiency of GP-based trajectory representation in various motion estimation tasks, and the examples can serve as prototypes to help researchers quickly develop future applications such as batch optimization, calibration, sensor fusion, and trajectory planning with continuous-time trajectory representation. Our project is accessible at https://github.com/brytsknguyen/gptr.