🤖 AI Summary
Stereo 3D detection methods offer roughly twice the accuracy of monocular approaches, but their high computational cost means they run at only about half the speed. To address this, we propose StereoDETR, an efficient framework that pairs a monocular DETR branch with a stereo branch. Its core innovation is a differentiable depth sampling strategy that serves as the sole coupling between the two branches, complemented by low-cost multi-scale disparity feature extraction and a constrained supervision strategy for sampling points that requires no additional annotations and mitigates occlusion effects. StereoDETR achieves real-time inference and is the first stereo-based detector to surpass monocular methods in speed, while setting new state-of-the-art pedestrian and cyclist results on the KITTI benchmark, delivering gains in both accuracy and efficiency.
📝 Abstract
Compared to monocular 3D object detection, stereo-based methods offer significantly higher accuracy but still suffer from high computational overhead and latency. The state-of-the-art stereo 3D detection method achieves twice the accuracy of monocular approaches, yet runs at only half their speed. In this paper, we propose StereoDETR, an efficient stereo 3D object detection framework based on DETR. StereoDETR consists of two branches: a monocular DETR branch and a stereo branch. The DETR branch is built upon 2D DETR with additional channels for predicting object scale, orientation, and sampling points. The stereo branch leverages low-cost multi-scale disparity features to predict object-level depth maps. These two branches are coupled solely through a differentiable depth sampling strategy. To handle occlusion, we introduce a constrained supervision strategy for sampling points without requiring extra annotations. StereoDETR achieves real-time inference and is the first stereo-based method to surpass monocular approaches in speed. It also achieves competitive accuracy on the public KITTI benchmark, setting new state-of-the-art results on the pedestrian and cyclist subsets. The code is available at https://github.com/shiyi-mu/StereoDETR-OPEN.
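The coupling mechanism can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's implementation: the DETR branch predicts fractional sampling points, and object depth is read from the stereo branch's depth map by bilinear interpolation, an operation that is differentiable in both the sampling coordinates and the depth values (in a deep-learning framework this would typically be something like `torch.nn.functional.grid_sample`).

```python
# Hedged sketch of differentiable depth sampling (hypothetical, stdlib-only).
# A real implementation would operate on tensors with autograd; bilinear
# interpolation is shown here because its output is a smooth function of
# both the (u, v) sampling point and the depth-map values, which is what
# lets gradients flow between the two branches.

def bilinear_sample(depth_map, u, v):
    """Bilinearly interpolate depth_map (a list of rows) at fractional (u, v).

    u indexes columns (x), v indexes rows (y); out-of-range coordinates
    are clamped to the map border.
    """
    h, w = len(depth_map), len(depth_map[0])
    u = min(max(u, 0.0), w - 1.0)
    v = min(max(v, 0.0), h - 1.0)
    x0, y0 = int(u), int(v)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    du, dv = u - x0, v - y0
    top = depth_map[y0][x0] * (1 - du) + depth_map[y0][x1] * du
    bot = depth_map[y1][x0] * (1 - du) + depth_map[y1][x1] * du
    return top * (1 - dv) + bot * dv

# Toy object-level depth map (metres) standing in for the stereo branch's
# output, sampled at one point predicted by the DETR branch.
depth = [[10.0, 12.0],
         [14.0, 16.0]]
print(bilinear_sample(depth, 0.5, 0.5))  # -> 13.0
```

Because the sampled depth varies smoothly with the predicted point, supervising the final 3D position also supervises where the detector samples, which is presumably how the constrained sampling-point supervision can work without extra annotations.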