🤖 AI Summary
This work addresses the performance degradation in 3D object detection for autonomous driving caused by temporal asynchrony among multimodal sensors, particularly in dynamic scenes. To this end, the authors propose AsyncBEV, a novel module that introduces scene flow concepts into asynchronous multimodal bird’s-eye-view (BEV) perception. By estimating 2D flow fields between cross-modal BEV features and warping them according to the known time offsets, AsyncBEV achieves lightweight, plug-and-play spatial alignment. The method is compatible with both token-based and grid-based BEV architectures, such as CMT and UniBEV. Under a maximum time offset of 0.5 seconds, AsyncBEV improves the NuScenes Detection Score (NDS) for dynamic objects by 16.6% (CMT) and 11.9% (UniBEV), substantially outperforming baselines based on ego-motion compensation.
📝 Abstract
In autonomous driving, multi-modal perception tasks like 3D object detection typically rely on well-synchronized sensors at both training and inference time. However, despite the use of hardware- or software-based synchronization algorithms, perfect synchrony is rarely guaranteed: sensors may operate at different frequencies, and real-world factors such as network latency, hardware failures, or processing bottlenecks often introduce time offsets between sensors. Such asynchrony degrades perception performance, especially for dynamic objects. To address this challenge, we propose AsyncBEV, a trainable, lightweight, and generic module that improves the robustness of 3D Bird's-Eye-View (BEV) object detection models against sensor asynchrony. Inspired by scene flow estimation, AsyncBEV first estimates the 2D flow between the BEV features of two different sensor modalities, taking into account the known time offset between their measurements. The predicted feature flow is then used to warp and spatially align the feature maps, and we show this can easily be integrated into different current BEV detector architectures (e.g., BEV grid-based and token-based). Extensive experiments demonstrate that AsyncBEV improves robustness against both small and large asynchrony between LiDAR and camera sensors in both the token-based CMT and the grid-based UniBEV, especially for dynamic objects. We significantly outperform the ego-motion-compensated CMT and UniBEV baselines, notably by 16.6% and 11.9% NDS on dynamic objects in the worst-case scenario of a 0.5 s time offset. Code will be released upon acceptance.
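To make the flow-and-warp idea concrete, below is a minimal PyTorch sketch of the core alignment step described above: a small convolutional head regresses a per-cell 2D flow from the concatenated cross-modal BEV features, the flow is scaled by the measured time offset, and the asynchronous feature map is warped with a sampling grid. All names (`AsyncBEVAlign`, `flow_head`) and architectural choices here are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AsyncBEVAlign(nn.Module):
    """Illustrative sketch of flow-based BEV alignment under sensor asynchrony.

    NOTE: this is an assumption-laden toy version, not the paper's code.
    """

    def __init__(self, channels: int):
        super().__init__()
        # Small conv head regressing a per-cell (dx, dy) flow
        # from the concatenated cross-modal BEV features.
        self.flow_head = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 2, kernel_size=3, padding=1),
        )

    def forward(self, bev_ref: torch.Tensor, bev_async: torch.Tensor,
                time_offset: torch.Tensor) -> torch.Tensor:
        # bev_ref, bev_async: (B, C, H, W); time_offset: (B,) in seconds.
        B, _, H, W = bev_async.shape
        flow = self.flow_head(torch.cat([bev_ref, bev_async], dim=1))  # (B, 2, H, W)

        # Scale the predicted flow by the known time offset so that a larger
        # asynchrony produces a proportionally larger displacement.
        flow = flow * time_offset.view(B, 1, 1, 1)

        # Identity sampling grid in normalized [-1, 1] coordinates.
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, H, device=flow.device),
            torch.linspace(-1, 1, W, device=flow.device),
            indexing="ij",
        )
        base_grid = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(B, -1, -1, -1)

        # Convert the flow from BEV-cell units to normalized coordinates.
        flow_norm = torch.stack(
            [2.0 * flow[:, 0] / max(W - 1, 1), 2.0 * flow[:, 1] / max(H - 1, 1)],
            dim=-1,
        )

        # Warp the asynchronous feature map so it is spatially aligned with
        # the reference modality before fusion in the detector head.
        return F.grid_sample(bev_async, base_grid + flow_norm, align_corners=True)
```

In this reading, the module is plug-and-play because it only consumes the two BEV feature maps and the measured time offset, so it can sit in front of either a grid-based fusion step or a token-based decoder without changing the rest of the detector.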