🤖 AI Summary
In GPS-denied indoor environments, visual-inertial odometry (VIO) suffers from drift, compromising trajectory reliability for autonomous flight. To address this, we propose a perception-aware trajectory generation framework. Our method tightly integrates scene coordinate regression (SCR) with VIO, leveraging evidential deep learning to quantify SCR uncertainty; this uncertainty actively guides a receding-horizon trajectory optimizer to orient the onboard camera toward high-confidence visual regions, establishing a closed-loop perception–control co-design. The framework combines fixed-lag smoothing with multi-rate sensor fusion, correcting high-frequency IMU propagation with low-frequency SCR poses, to ensure both real-time performance and accuracy. Simulation results show 54% and 40% reductions in translational and rotational trajectory errors, respectively, compared to a yaw-fixed baseline. Hardware-in-the-loop experiments further validate the framework's feasibility and real-time capability.
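As a concrete illustration of the uncertainty mechanism, the sketch below shows a deep evidential regression head for per-pixel scene coordinates, following the standard Normal-Inverse-Gamma (NIG) parameterization of deep evidential regression (Amini et al., 2020). The paper does not publish its architecture, so the module name, feature dimensions, and activations here are assumptions; only the epistemic-variance formula beta / (nu * (alpha - 1)) is the standard NIG result.

```python
# Hypothetical sketch of an evidential head for scene coordinate regression.
# Follows the Normal-Inverse-Gamma (NIG) parameterization of deep evidential
# regression (Amini et al., 2020); the paper's actual head may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialSCRHead(nn.Module):
    """Maps a dense feature map to 3D scene coordinates plus per-axis
    NIG evidence parameters (gamma, nu, alpha, beta)."""

    def __init__(self, feat_dim: int = 256):
        super().__init__()
        # 4 NIG parameters for each of the 3 scene-coordinate axes.
        self.proj = nn.Conv2d(feat_dim, 3 * 4, kernel_size=1)

    def forward(self, feats: torch.Tensor):
        # feats: (B, feat_dim, H, W) features from the SCR backbone.
        out = self.proj(feats)                          # (B, 12, H, W)
        gamma, log_nu, log_alpha, log_beta = out.chunk(4, dim=1)
        nu = F.softplus(log_nu)                         # nu > 0
        alpha = F.softplus(log_alpha) + 1.0             # alpha > 1
        beta = F.softplus(log_beta)                     # beta > 0
        # Epistemic variance of the predicted coordinate (standard NIG result).
        epistemic = beta / (nu * (alpha - 1.0))
        return gamma, epistemic                         # coordinates + uncertainty map

# Usage: low-variance pixels mark the "high-confidence visual regions"
# the planner is steered toward.
head = EvidentialSCRHead(feat_dim=256)
coords, sigma2 = head(torch.randn(1, 256, 60, 80))
confidence_map = 1.0 / (1.0 + sigma2.mean(dim=1))       # (1, 60, 80), higher = safer
```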
📝 Abstract
Autonomous flight in GPS-denied indoor spaces requires trajectories that keep visual localization error tightly bounded across varied missions. Whereas visual-inertial odometry (VIO) accumulates drift over time, scene coordinate regression (SCR) yields drift-free, high-accuracy absolute pose estimates. We present a perception-aware framework that couples an evidential-learning-based SCR pose estimator with a receding-horizon trajectory optimizer. The optimizer steers the onboard camera toward pixels whose low predicted uncertainty indicates reliable scene coordinates, while a fixed-lag smoother fuses the low-rate SCR stream with high-rate IMU data to close the perception–control loop in real time. In simulation, our planner reduces mean translation error by 54% and 15%, and mean rotation error by 40% and 31%, relative to yaw-fixed and forward-looking baselines, respectively. A hardware-in-the-loop experiment further validates the feasibility of the proposed framework.
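The abstract does not specify the optimizer's objective, so the following is a minimal sketch of how an uncertainty-weighted perception term could enter a receding-horizon cost. The weights, the finite-difference smoothness penalty, and the `confidence_at` callback are all illustrative assumptions rather than the paper's implementation.

```python
# Illustrative sketch of a perception-aware receding-horizon cost; the actual
# optimizer, weights, and perception model in the paper are not specified here.
import numpy as np

def perception_aware_cost(positions, yaws, confidence_at,
                          w_smooth=1.0, w_perc=5.0, dt=0.1):
    """positions: (N, 3) candidate waypoints over the horizon.
    yaws: (N,) candidate camera headings.
    confidence_at: callable (position, yaw) -> [0, 1] SCR confidence,
    e.g. sampled from the evidential uncertainty map along the view ray."""
    # Smoothness: penalize squared acceleration via finite differences.
    acc = np.diff(positions, n=2, axis=0) / dt**2
    smooth_term = np.sum(acc**2)
    # Perception: penalize low SCR confidence along the horizon, so the
    # optimizer trades a small detour or yaw change for reliable localization.
    conf = np.array([confidence_at(p, y) for p, y in zip(positions, yaws)])
    perc_term = np.sum(1.0 - conf)
    return w_smooth * smooth_term + w_perc * perc_term

# Usage with a toy confidence field (hypothetical): confidence peaks when
# the camera faces the well-textured direction at yaw = 0.
toy_conf = lambda p, yaw: 0.5 * (1.0 + np.cos(yaw))
N = 10
positions = np.cumsum(np.full((N, 3), 0.1), axis=0)
print(perception_aware_cost(positions, np.zeros(N), toy_conf))
```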
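Similarly, the fixed-lag smoother is a sliding-window MAP estimator whose details lie outside the abstract; the toy loop below only illustrates the multi-rate interleaving it must handle: high-rate IMU propagation drifts between fixes, and each low-rate, drift-free SCR pose pulls the estimate back. The rates, gain, and noise levels are invented, and the constant-gain correction is a simple stand-in, not a smoother.

```python
# Toy multi-rate fusion loop. The paper uses a fixed-lag smoother (a sliding-
# window MAP optimizer); this constant-gain correction is only a stand-in to
# show how low-rate SCR poses and high-rate IMU samples interleave in time.
import numpy as np

IMU_HZ, SCR_HZ = 200, 10                # assumed rates, for illustration only
dt = 1.0 / IMU_HZ
rng = np.random.default_rng(0)

true_pos = np.zeros(3)
true_vel = np.full(3, 0.5)              # constant-velocity ground truth
est_pos = np.zeros(3)
est_vel = true_vel.copy()
K = 0.3                                 # correction gain (hypothetical)

for k in range(2 * IMU_HZ):             # two seconds of fusion
    true_pos = true_pos + true_vel * dt
    # High-rate IMU propagation with a small accelerometer bias -> drift.
    est_vel = est_vel + rng.normal(0.02, 0.01, 3) * dt
    est_pos = est_pos + est_vel * dt
    # Low-rate, drift-free (but noisy) SCR absolute pose correction.
    if k % (IMU_HZ // SCR_HZ) == 0:
        scr_fix = true_pos + rng.normal(0.0, 0.01, 3)
        est_pos = est_pos + K * (scr_fix - est_pos)

print("final error:", np.linalg.norm(est_pos - true_pos))
```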