🤖 AI Summary
This work addresses the limited generalization of event-driven object detection models caused by variations in event camera sensor parameters, which undermines robustness across devices. For the first time, it systematically uncovers the relationship between intrinsic event camera parameters and detection performance. The authors propose a joint distribution training framework that integrates event data modeling with parameter sensitivity analysis to enable adaptive inference across diverse sensors. By explicitly accounting for sensor-specific characteristics during training, the method significantly improves detection accuracy on unseen event cameras, demonstrating the efficacy of parameter-aware learning for cross-sensor transfer. This advancement represents a critical step toward sensor-agnostic event-based perception.
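As background for the "event data modeling" the summary refers to, below is a minimal, idealized sketch of the standard event-camera generation model: a pixel emits an event whenever its log intensity changes by the contrast threshold C since its last event. It omits noise and the refractory period, and `generate_events` and its `contrast_threshold` parameter are illustrative names, not the paper's code.

```python
import numpy as np

def generate_events(log_frames, timestamps, contrast_threshold=0.2):
    """Idealized event generation from a sequence of log-intensity frames.

    Emits one (t, x, y, polarity) tuple per frame and pixel whose log
    intensity has drifted by at least C from its per-pixel reference;
    a real sensor would emit one event per crossed multiple of C.
    """
    ref = log_frames[0].astype(float).copy()   # per-pixel reference level
    events = []
    for frame, t in zip(log_frames[1:], timestamps[1:]):
        diff = frame - ref
        for polarity in (+1, -1):
            mask = polarity * diff >= contrast_threshold
            n = np.floor(polarity * diff[mask] / contrast_threshold)
            ys, xs = np.nonzero(mask)
            events += [(t, x, y, polarity) for x, y in zip(xs, ys)]
            # Advance the reference by the crossed multiples of C
            ref[mask] += polarity * contrast_threshold * n
    return sorted(events)  # sorted by timestamp
```

Lowering `contrast_threshold` makes the same scene produce far more events (and raising it, fewer), which is exactly the kind of intrinsic-parameter sensitivity whose effect on detection performance the work analyzes.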
📝 Abstract
Bio-inspired event cameras have recently attracted significant research interest due to their asynchronous operation and low latency. These characteristics yield a high dynamic range and significantly reduced motion blur. However, because their output signals are novel in nature, the available data lack variability, and the parameters characterizing these signals have not been analyzed extensively. This paper addresses these issues by providing readers with an in-depth understanding of how intrinsic sensor parameters affect the performance of models trained on event data, specifically for object detection. We also use our findings to extend the downstream model toward sensor-agnostic robustness.
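To make the idea of training toward sensor-agnostic robustness concrete, here is a hedged sketch (not the authors' implementation) of a training step that randomizes the contrast threshold used to simulate events, so the detector is exposed to a distribution of sensor settings rather than a single device. The names `simulate_events`, `events_to_tensor`, `detector`, and the threshold range are placeholders supplied by the reader.

```python
import random
import torch

def train_step(detector, optimizer, clips, targets,
               simulate_events, events_to_tensor,
               threshold_range=(0.1, 0.5)):
    """One parameter-randomized training step (illustrative sketch).

    simulate_events(clip, c) should render events from a video clip at
    contrast threshold c; events_to_tensor should voxelize them into
    equally shaped tensors the detector accepts.
    """
    # Sample an intrinsic parameter per step so the model sees a joint
    # distribution over scenes *and* sensor settings.
    c = random.uniform(*threshold_range)
    voxels = torch.stack([events_to_tensor(simulate_events(clip, c))
                          for clip in clips])
    optimizer.zero_grad()
    loss = detector(voxels, targets)   # assumed to return a scalar loss
    loss.backward()
    optimizer.step()
    return loss.item()
```

Sampling the parameter per step (rather than fixing it per dataset) is one simple way to approximate training over the joint distribution of scene content and sensor configuration; the paper's actual framework may weight or structure this distribution differently.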