🤖 AI Summary
This work addresses a critical limitation of vision-only temporal 3D object detection: inaccurate velocity estimation, which severely constrains NuScenes detection performance—particularly the NuScenes Detection Score (NDS). To tackle this, we propose a velocity-optimized, enhanced Rotary Position Encoding (Rotary PE) that explicitly incorporates motion priors and strengthens cross-frame feature alignment and temporal motion representation. We further design an end-to-end trainable temporal fusion module, tightly integrated with the StreamPETR architecture built upon a ViT-L backbone. Crucially, our method improves velocity prediction accuracy without requiring additional sensors or post-processing. Evaluated on the NuScenes test set, it achieves a new state-of-the-art NDS of 70.86%, demonstrating that refined temporal position modeling is pivotal for accurate motion estimation in vision-only 3D detection.
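The summary does not spell out the encoding itself, but the proposed method builds on standard Rotary Position Encoding. As a minimal illustrative sketch (not the authors' velocity-optimized variant), RoPE rotates pairs of feature dimensions by position-dependent angles, so that attention scores between two encoded features depend only on their relative position offset — the property that makes it attractive for cross-frame temporal alignment. The function name and test values below are hypothetical:

```python
import numpy as np

def rotary_position_encoding(x, pos, base=10000.0):
    """Apply standard Rotary Position Encoding (RoPE) to a feature vector.

    x   : (d,) feature vector, with d even
    pos : scalar position (e.g. a temporal frame index)

    Each dimension pair (2i, 2i+1) is rotated by angle pos * base^(-2i/d).
    """
    d = x.shape[-1]
    assert d % 2 == 0, "feature dimension must be even"
    freqs = base ** (-np.arange(0, d, 2) / d)   # per-pair rotation frequency
    angles = pos * freqs
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[0::2], x[1::2]
    out = np.empty_like(x, dtype=float)
    out[0::2] = x1 * cos - x2 * sin             # 2D rotation of each pair
    out[1::2] = x1 * sin + x2 * cos
    return out

# Key property: the dot product of two rotated vectors depends only on
# the relative offset (pos_q - pos_k), which is what enables consistent
# cross-frame feature matching in temporal models.
rng_q, rng_k = np.random.default_rng(0), np.random.default_rng(1)
q, k = rng_q.standard_normal(8), rng_k.standard_normal(8)
s1 = rotary_position_encoding(q, 5) @ rotary_position_encoding(k, 3)
s2 = rotary_position_encoding(q, 12) @ rotary_position_encoding(k, 10)
assert np.isclose(s1, s2)  # same offset (2) -> same attention score
```

In a temporal detector, `pos` would correspond to the frame index (or a motion-derived quantity), so that queries and propagated features from different timestamps can be compared on a relative basis.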
📝 Abstract
This technical report introduces a targeted improvement to the StreamPETR framework, specifically aimed at enhancing velocity estimation, a critical factor influencing the overall NuScenes Detection Score. While StreamPETR exhibits strong 3D bounding box detection performance, as reflected by its high mean Average Precision (mAP), our analysis identified velocity estimation as a substantial bottleneck when evaluating on the NuScenes dataset. To overcome this limitation, we propose a customized positional embedding strategy tailored to enhance temporal modeling capabilities. Experimental evaluations conducted on the NuScenes test set demonstrate that our improved approach achieves a state-of-the-art NDS of 70.86% using the ViT-L backbone, setting a new benchmark for camera-only 3D object detection.