🤖 AI Summary
High-speed agile robots (e.g., acrobatic UAVs) suffer from degraded motion estimation in high-dynamic, textureless environments due to sensor latency, motion blur, and image distortion—especially when relying solely on conventional cameras or IMUs.
Method: We propose an IMU-free, feature-matching-free heterogeneous sensing fusion framework that directly couples continuous-time event streams from an event camera with Doppler velocity measurements from mmWave radar. A continuous-time state-space model is formulated, and asynchronous temporal fusion is performed via a fixed-lag smoother operating at millisecond-level latency.
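The fixed-lag smoothing idea above can be illustrated with a toy sketch: buffer asynchronous, timestamped velocity measurements (as would arrive from the two sensors), fit a low-order continuous-time model over a sliding window, and query it at a fixed lag behind the newest measurement. This is only a minimal illustration of the fixed-lag pattern, not the paper's actual estimator; the class name, the scalar state, and the linear (constant-acceleration) model are all simplifying assumptions.

```python
from collections import deque
import numpy as np

class FixedLagVelocitySmoother:
    """Toy fixed-lag smoother (illustrative, not the paper's method):
    buffers asynchronous scalar velocity measurements, fits a linear
    continuous-time model v(t) = a + b*(t - t_last) over the window,
    and evaluates it at t_last - lag."""

    def __init__(self, lag=0.05, window=0.2):
        self.lag = lag          # fixed lag behind the newest stamp [s]
        self.window = window    # sliding-window length [s]
        self.buf = deque()      # time-ordered (t, v) pairs

    def add(self, t, v):
        # Append a new measurement and drop anything older than the window.
        self.buf.append((t, v))
        while self.buf and self.buf[0][0] < t - self.window:
            self.buf.popleft()

    def estimate(self):
        # Least-squares fit of v(t) over the window, queried at the lag point.
        if len(self.buf) < 2:
            return None
        t = np.array([p[0] for p in self.buf])
        v = np.array([p[1] for p in self.buf])
        A = np.stack([np.ones_like(t), t - t[-1]], axis=1)
        coef, *_ = np.linalg.lstsq(A, v, rcond=None)
        return coef[0] + coef[1] * (-self.lag)
```

Because the query point sits inside the window rather than at its edge, the lagged estimate is an interpolation supported by measurements on both sides, which is the robustness/latency trade-off a fixed-lag smoother buys over pure filtering.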
Contribution/Results: The method significantly improves robustness and real-time performance of velocity estimation under extreme maneuvering and textureless conditions. Evaluated on a custom high-dynamic dataset, it achieves sub-meter-per-second velocity accuracy and supports low-power edge deployment.
📝 Abstract
Achieving reliable ego-motion estimation for agile robots, e.g., aerobatic aircraft, remains challenging because most robot sensors fail to respond in a timely and clear manner to highly dynamic motion, often producing blurred, distorted, and delayed measurements. In this paper, we propose an IMU-free and feature-association-free framework for aggressive ego-motion velocity estimation of a robot platform in highly dynamic scenarios by combining two types of exteroceptive sensors: an event camera and a millimeter-wave radar. First, we use instantaneous raw events and Doppler measurements to derive rotational and translational velocities directly. Without a sophisticated association process between measurement frames, the proposed method is more robust in texture-less and structure-less environments and more computationally efficient on edge computing devices. Then, in the back end, we propose a continuous-time state-space model that fuses the hybrid time-based and event-based measurements to estimate the ego-motion velocity in a fixed-lag smoother fashion. Finally, we validate our velometer framework extensively on self-collected experimental datasets. The results indicate that our IMU-free and association-free ego-motion estimation framework achieves reliable and efficient velocity output in challenging environments. The source code, an illustrative video, and the dataset are available at https://github.com/ZzhYgwh/TwistEstimator.
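The abstract's claim that translational velocity can be derived directly from instantaneous Doppler measurements, with no inter-frame association, can be sketched as follows: for static scene points, each radar detection's radial speed is the projection of the (negated) ego velocity onto the detection's direction, so a single scan yields a linear least-squares problem. This is a standard radar-ego-velocity sketch under a static-scene assumption, not the authors' implementation; the function name is illustrative.

```python
import numpy as np

def ego_velocity_from_doppler(directions, dopplers):
    """Recover 3-D ego velocity from one radar scan (static-scene sketch).

    directions : (N, 3) unit vectors from sensor to each detected target.
    dopplers   : (N,) measured radial speeds; for static targets,
                 v_r_i = -d_i . v_ego (targets appear to approach when
                 the sensor moves toward them).
    Stacking all detections gives -D @ v_ego = v_r, solved in the
    least-squares sense (N >= 3 non-coplanar directions needed).
    """
    A = -np.asarray(directions, dtype=float)
    b = np.asarray(dopplers, dtype=float)
    v_ego, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v_ego
```

Because each scan is solved independently, no feature matching across frames is required, which matches the texture-less robustness argument in the abstract; a robust variant (e.g., RANSAC over detections) would additionally reject moving targets.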