🤖 AI Summary
Temporal misalignment among multiple event cameras arises from trigger and transmission delays, while hardware synchronization is constrained by circuit dependencies and device incompatibility (e.g., the CeleX5 lacks support). Method: We propose a hardware-agnostic software synchronization approach that leverages the onset-time disparity of event density distributions across cameras. Our method jointly models event density, optimizes distribution similarity, and dynamically adjusts timestamps for high-precision temporal alignment. Contribution/Results: This framework overcomes hardware limitations, significantly enhancing system compatibility and deployment flexibility. Extensive experiments across diverse scenes and camera models (e.g., DAVIS346, CeleX6) demonstrate consistently sub-10 ms synchronization error, enabling accurate fusion of multi-view event data for downstream tasks such as 3D reconstruction and motion estimation.
📝 Abstract
Event cameras are a novel type of sensor designed to capture the dynamic changes of a scene. Due to factors such as trigger and transmission delays, a time offset exists in the data collected by multiple event cameras, leading to inaccurate information fusion. The collected data therefore needs to be synchronized to overcome any potential time offset. Hardware synchronization methods require additional circuits, and certain models of event cameras (e.g., CeleX5) do not support hardware synchronization at all. This paper therefore proposes a hardware-free event camera synchronization method. The method determines the differences between start times by minimizing the dissimilarity of the event density distributions of different event cameras, and synchronizes the data by adjusting timestamps. Experiments demonstrate that the method's synchronization error is less than 10 ms across various scenes with multiple models of event cameras.
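The abstract does not spell out the optimization, but the core idea (bin each camera's events into a density histogram, search for the time shift minimizing a dissimilarity between the histograms, then shift timestamps) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the millisecond binning, the L2 dissimilarity, the search range, and all function names (`event_density`, `estimate_offset_ms`, `synchronize`) are assumptions made here for concreteness.

```python
import numpy as np

def event_density(timestamps, bin_ms, t_end):
    """Event counts per time bin (timestamps in milliseconds)."""
    bins = np.arange(0.0, t_end + bin_ms, bin_ms)
    hist, _ = np.histogram(timestamps, bins=bins)
    return hist.astype(float)

def estimate_offset_ms(ts_ref, ts_other, bin_ms=1.0, max_shift_ms=100.0):
    """Estimate how much later the second camera started than the reference
    by minimizing the L2 dissimilarity between shifted density histograms.
    (L2 is an illustrative choice; the paper's dissimilarity may differ.)"""
    t_end = max(ts_ref.max(), ts_other.max())
    d_ref = event_density(ts_ref, bin_ms, t_end)
    d_oth = event_density(ts_other, bin_ms, t_end)
    max_shift = int(max_shift_ms / bin_ms)
    best_shift, best_cost = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        # Compare d_ref shifted by s bins against d_oth on their overlap.
        if s >= 0:
            a, b = d_ref[s:], d_oth[:len(d_oth) - s]
        else:
            a, b = d_ref[:s], d_oth[-s:]
        n = min(len(a), len(b))
        if n == 0:
            continue
        cost = np.mean((a[:n] - b[:n]) ** 2)
        if cost < best_cost:
            best_cost, best_shift = cost, s
    return best_shift * bin_ms

def synchronize(ts_other, offset_ms):
    """Map the second camera's timestamps onto the reference clock."""
    return ts_other + offset_ms
```

A positive estimated offset means the second camera started later, so its local timestamps are shifted forward to land on the reference clock. In practice the bin width trades robustness against precision: the reported sub-10 ms error suggests bins at or below that scale.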