RayD3D: Distilling Depth Knowledge Along the Ray for Robust Multi-View 3D Object Detection

📅 2026-03-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited robustness of multi-view 3D object detection in real-world scenarios, primarily caused by inaccurate depth prediction and interference from depth-irrelevant cues—such as LiDAR point density—in existing cross-modal distillation methods. Drawing upon imaging geometry principles, the authors propose a novel ray-constrained distillation paradigm that performs depth knowledge transfer along rays extending from the camera to the true object locations. Specifically, they introduce two modules: Ray Contrastive Distillation (RCD) and Ray Weighted Distillation (RWD), which effectively suppress irrelevant distractions. Integrated within a bird’s-eye-view (BEV) framework, this approach enables model-agnostic distillation without additional inference overhead. Experiments on NuScenes and RoboBEV benchmarks, including perturbed settings, demonstrate consistent and significant robustness improvements for BEVDet, BEVDepth4D, and BEVFormer, outperforming current state-of-the-art methods.

📝 Abstract
Multi-view 3D detection with bird's eye view (BEV) is crucial for autonomous driving and robotics, but its robustness in real-world scenarios is limited because it struggles to predict accurate depth values. A mainstream solution, cross-modal distillation, transfers depth information from LiDAR to camera models but also unintentionally transfers depth-irrelevant information (e.g., LiDAR density). To mitigate this issue, we propose RayD3D, which transfers crucial depth knowledge along the ray: the line projecting from the camera to the true location of an object. It is based on the fundamental imaging principle that the predicted location of an object can only vary along this ray, and is ultimately determined by the predicted depth value. Therefore, distilling along the ray enables more effective depth information transfer. More specifically, we design two ray-based distillation modules. Ray-based Contrastive Distillation (RCD) incorporates contrastive learning into distillation by sampling along the ray to learn how LiDAR accurately locates objects. Ray-based Weighted Distillation (RWD) adaptively adjusts the distillation weight based on the ray to minimize the interference of depth-irrelevant information in LiDAR. For validation, we apply RayD3D to three representative types of BEV-based models: BEVDet, BEVDepth4D, and BEVFormer. Our method is trained on clean NuScenes and tested on both clean NuScenes and RoboBEV with a variety of data corruptions. It significantly improves the robustness of all three base models in all scenarios without increasing inference costs, and achieves the best results compared to recently released multi-view and distillation models.
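The core idea described above can be sketched in code. The following is a minimal, hypothetical illustration (not the paper's actual implementation): points are sampled along the camera-to-object ray, and an InfoNCE-style contrastive loss pulls the student feature at the true depth toward the LiDAR teacher's feature while pushing away features sampled at wrong depths. The sampling range, temperature, and feature shapes are all assumptions for illustration.

```python
import numpy as np

def sample_along_ray(cam_origin, obj_center, num_samples=8):
    """Sample 3D points along the ray from the camera to the object center.

    Points span 0.5x to 1.5x of the true depth (an assumed range); samples
    at wrong depths act as depth-perturbed negatives for contrastive learning.
    """
    ts = np.linspace(0.5, 1.5, num_samples)            # fractions of true depth
    direction = obj_center - cam_origin                # ray direction (unnormalized)
    points = cam_origin + ts[:, None] * direction      # (num_samples, 3)
    return points, ts

def ray_contrastive_loss(student_feats, teacher_feat, pos_idx, tau=0.1):
    """InfoNCE-style loss over ray samples.

    student_feats: (N, D) camera-model features at the N ray samples.
    teacher_feat:  (D,)   LiDAR feature at the true object location.
    pos_idx:       index of the sample at the true depth (the positive).
    """
    # cosine similarity between each sampled student feature and the teacher
    s = student_feats / np.linalg.norm(student_feats, axis=1, keepdims=True)
    t = teacher_feat / np.linalg.norm(teacher_feat)
    logits = (s @ t) / tau
    logits -= logits.max()                             # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[pos_idx])                     # low when only the true-depth
                                                       # sample matches the teacher
```

A simple sanity check: if only the true-depth sample's feature aligns with the teacher's, the loss is near zero; if all samples look alike, the loss approaches `log(N)`, reflecting that the student has learned nothing about where along the ray the object actually sits.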
Problem

Research questions and friction points this paper is trying to address.

multi-view 3D object detection
depth prediction
cross-modal distillation
robustness
LiDAR-camera fusion
Innovation

Methods, ideas, or system contributions that make the work stand out.

ray-based distillation
depth knowledge transfer
multi-view 3D object detection
contrastive learning
robustness