FreqPDE: Rethinking Positional Depth Embedding for Multi-View 3D Object Detection Transformers

📅 2025-10-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
In multi-view, 2D image-based 3D object detection, inaccurate depth estimation remains a critical challenge, manifesting as depth discontinuities at object boundaries, poor small-object discrimination, cross-view inconsistency, and scale sensitivity. To address these issues, this paper proposes the Frequency-aware Positional Depth Embedding (FreqPDE) framework. Its key contributions are: (1) a Frequency-aware Spatial Pyramid Encoder (FSPE) that jointly encodes multi-level high-frequency edge and low-frequency semantic features to enhance structural fidelity in depth prediction; (2) a Cross-view Scale-invariant Depth Predictor (CSDP) that jointly optimizes depth consistency across views and robustness to object scale variation; and (3) a hybrid depth supervision scheme coupled with channel-attention-guided 2D-3D feature fusion. Evaluated on nuScenes, FreqPDE achieves state-of-the-art performance, notably improving depth continuity and small-object recall while boosting overall 3D detection accuracy.
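The hybrid depth supervision mentioned above (learning depth from both metric and distribution aspects) can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's implementation: the bin discretization, nearest-bin targets, and loss weights are all assumptions.

```python
import numpy as np

def hybrid_depth_loss(logits, gt_depth, bin_centers, w_dist=1.0, w_metric=1.0):
    """Illustrative hybrid depth loss: a distribution term (cross-entropy over
    discretized depth bins) plus a metric term (L1 on the expected depth).

    logits:      (N, B) per-pixel scores over B depth bins
    gt_depth:    (N,)   ground-truth depths (e.g. projected LiDAR points)
    bin_centers: (B,)   metric centers of the depth bins
    """
    # Softmax over bins -> per-pixel depth distribution.
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)

    # Distribution loss: cross-entropy against the nearest-bin one-hot target.
    target_bin = np.abs(gt_depth[:, None] - bin_centers[None, :]).argmin(axis=1)
    ce = -np.log(p[np.arange(len(gt_depth)), target_bin] + 1e-9).mean()

    # Metric loss: L1 between the distribution's expected depth and ground truth.
    expected = (p * bin_centers[None, :]).sum(axis=1)
    l1 = np.abs(expected - gt_depth).mean()

    return w_dist * ce + w_metric * l1
```

The two terms are complementary: the distribution term shapes the full per-pixel depth distribution, while the metric term anchors its expectation to the measured depth.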

📝 Abstract
Detecting 3D objects accurately from multi-view 2D images is a challenging yet essential task in the field of autonomous driving. Current methods resort to integrating depth prediction to recover the spatial information for object query decoding, which necessitates explicit supervision from LiDAR points during the training phase. However, the predicted depth quality remains unsatisfactory, exhibiting issues such as depth discontinuity at object boundaries and poor distinction of small objects, which are mainly caused by the sparse supervision of projected points and the use of high-level image features for depth prediction. Besides, cross-view consistency and scale invariance are also overlooked in previous methods. In this paper, we introduce Frequency-aware Positional Depth Embedding (FreqPDE) to equip 2D image features with spatial information for the 3D detection transformer decoder, which can be obtained through three main modules. Specifically, the Frequency-aware Spatial Pyramid Encoder (FSPE) constructs a feature pyramid by combining high-frequency edge clues and low-frequency semantics from different levels. Then the Cross-view Scale-invariant Depth Predictor (CSDP) estimates the pixel-level depth distribution with cross-view and efficient channel attention mechanisms. Finally, the Positional Depth Encoder (PDE) combines the 2D image features and 3D position embeddings to generate the 3D depth-aware features for query decoding. Additionally, hybrid depth supervision is adopted for complementary depth learning from both metric and distribution aspects. Extensive experiments conducted on the nuScenes dataset demonstrate the effectiveness and superiority of our proposed method.
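The frequency decomposition underlying FSPE (separating high-frequency edge clues from low-frequency semantics) can be illustrated with a minimal sketch. The box-filter low-pass and residual high-pass below are stand-in assumptions; the paper's actual frequency operators are not specified in this summary.

```python
import numpy as np

def box_blur(x, k=3):
    """Simple k x k mean filter (edge-padded) as an illustrative low-pass."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = xp[i:i + k, j:j + k].mean()
    return out

def split_frequencies(feat):
    """Decompose a 2D feature map into a low-frequency (smoothed) component
    and a high-frequency (residual edge) component, with low + high == feat."""
    low = box_blur(feat)
    high = feat - low
    return low, high
```

The high-frequency residual concentrates on edges and fine structures (useful for sharp object boundaries), while the low-frequency component carries the smoother semantic content; FSPE combines such cues from different pyramid levels.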
Problem

Research questions and friction points this paper is trying to address.

Improving depth prediction quality for 3D object detection
Addressing depth discontinuity at object boundaries
Enhancing cross-view consistency and scale invariance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Frequency-aware Spatial Pyramid Encoder combines edge and semantic features
Cross-view Scale-invariant Depth Predictor estimates pixel-level depth
Positional Depth Encoder generates 3D depth-aware features for decoding
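The third point above, generating depth-aware features by combining 2D image features with 3D position embeddings, might look schematically like this. The per-depth-bin position embeddings, the expectation under the predicted depth distribution, and the sigmoid gating are all assumptions made for illustration, not the paper's actual PDE design.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def positional_depth_embed(img_feat, pos_embed_3d, depth_dist):
    """Illustrative fusion of 2D features and 3D position embeddings.

    img_feat:     (N, C)    flattened 2D image features
    pos_embed_3d: (B, N, C) one 3D position embedding per depth bin
    depth_dist:   (N, B)    softmax depth distribution per pixel
    """
    # Expected 3D position embedding under the predicted depth distribution.
    pe = np.einsum("nb,bnc->nc", depth_dist, pos_embed_3d)
    # Depth-aware feature: 2D feature gated by its position embedding,
    # plus the embedding itself (an assumed fusion rule).
    return img_feat * sigmoid(pe) + pe
```

Taking the expectation over depth bins lets the predicted depth distribution select which 3D positions each pixel's embedding should emphasize before query decoding.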