EvTTC: An Event Camera Dataset for Time-to-Collision Estimation

📅 2024-12-06
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Frame-based cameras in Automatic Emergency Braking (AEB) systems incur system latency from the fixed interval between exposures, which becomes dangerous when the relative speed of a leading vehicle changes suddenly or a pedestrian appears abruptly. To explore whether event cameras, which asynchronously report brightness changes with microsecond temporal resolution, can cope with such cases, this work introduces EvTTC, the first multi-sensor dataset dedicated to Time-to-Collision (TTC) estimation under high-relative-speed scenarios. EvTTC pairs standard and event cameras across a range of potential collision scenarios and object types encountered in daily driving, and provides LiDAR and GNSS/INS measurements for computing ground-truth TTC. Because testing TTC algorithms on full-scale mobile platforms is costly, the authors additionally release a small-scale TTC testbed for experimental validation and data augmentation. Both the data and the testbed design are open-sourced as a benchmark for vision-based TTC techniques.

📝 Abstract
Time-to-Collision (TTC) estimation lies at the core of the forward collision warning (FCW) functionality, which is key to all Automatic Emergency Braking (AEB) systems. Although frame-based camera solutions (e.g., Mobileye's) have proven successful in normal situations, some extreme cases, such as sudden variations in the relative speed of leading vehicles and the sudden appearance of pedestrians, still pose significant risks that they cannot handle. This is due to the inherent imaging principle of frame-based cameras, where the time interval between adjacent exposures introduces considerable system latency into AEB. Event cameras, as a novel bio-inspired sensor, offer ultra-high temporal resolution and can asynchronously report brightness changes at the microsecond level. To explore the potential of event cameras in the above-mentioned challenging cases, we propose EvTTC, which is, to the best of our knowledge, the first multi-sensor dataset focusing on TTC tasks under high-relative-speed scenarios. EvTTC consists of data collected using standard cameras and event cameras, covering various potential collision scenarios in daily driving and involving multiple collision objects. Additionally, LiDAR and GNSS/INS measurements are provided for the calculation of ground-truth TTC. Considering the high cost of testing TTC algorithms on full-scale mobile platforms, we also provide a small-scale TTC testbed for experimental validation and data augmentation. All the data and the design of the testbed are open-sourced, and they can serve as a benchmark that will facilitate the development of vision-based TTC techniques.
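For reference, TTC is conventionally defined as the time remaining before contact if the current closing speed persists; the ground truth derived here from LiDAR range and GNSS/INS ego-motion presumably follows this standard definition (an assumption about the pipeline, not a statement from the paper):

\tau(t) = \frac{d(t)}{-\dot{d}(t)} = \frac{d(t)}{v_{\mathrm{rel}}(t)}

where d(t) is the longitudinal distance to the leading object and v_rel(t) = -ḋ(t) > 0 is the closing speed; τ is undefined (effectively infinite) when the object is not approaching.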
Problem

Research questions and friction points this paper is trying to address.

Estimating Time-to-Collision (TTC) for collision warning systems
Addressing limitations of frame-based cameras in extreme scenarios
Providing a multi-sensor dataset for high-speed TTC tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses event cameras for ultra-high temporal resolution
Combines standard cameras with LiDAR and GNSS/INS for ground-truth TTC (see the sketch after this list)
Provides open-source dataset and small-scale testbed
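As an illustration of how LiDAR range and GNSS/INS measurements could be turned into ground-truth TTC, the minimal Python sketch below differentiates a range time series to obtain the closing speed and divides range by it. The function name, interface, and finite-difference scheme are assumptions made for illustration, not the authors' released tooling, which may instead fuse GNSS/INS ego velocity directly.

import numpy as np

def ttc_from_ranges(timestamps, ranges, eps=1e-3):
    """Ground-truth TTC from a time series of longitudinal ranges (hypothetical helper).

    timestamps: (N,) seconds, strictly increasing.
    ranges:     (N,) metres to the collision object (e.g., from LiDAR).
    Returns (N-1,) TTC values in seconds; np.inf where the object is not closing.
    """
    dt = np.diff(timestamps)
    closing_speed = -np.diff(ranges) / dt   # positive when approaching
    d = ranges[1:]                          # range at each later sample
    return np.where(closing_speed > eps, d / np.maximum(closing_speed, eps), np.inf)

# Example: closing a 30 m gap at a constant 10 m/s, so TTC decreases from about 2.9 s
t = np.linspace(0.0, 2.0, 21)               # 0.1 s steps
r = 30.0 - 10.0 * t
print(ttc_from_ranges(t, r)[:3])            # ~[2.9 2.8 2.7]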
Kaizhen Sun
Neuromorphic Automation and Intelligence Lab (NAIL), School of Robotics, Hunan University, Changsha, China
Jinghang Li
Neuromorphic Automation and Intelligence Lab (NAIL), School of Robotics, Hunan University, Changsha, China
Kuan Dai
Neuromorphic Automation and Intelligence Lab (NAIL), School of Robotics, Hunan University, Changsha, China
Bangyan Liao
Westlake University
Machine Learning · Multi-View Geometry · Global Optimization
Wei Xiong
Xidi Zhijia (Hunan) Co., Ltd., Changsha, China
Yi Zhou
Neuromorphic Automation and Intelligence Lab (NAIL), School of Robotics, Hunan University, Changsha, China