🤖 AI Summary
To address the limited robustness of existing 3D object detectors in autonomous driving, particularly for distant, small-scale, and occluded objects, this paper proposes DGFusion, a dual-guided fusion framework. Unlike prevailing single-guided multimodal approaches, DGFusion introduces a Difficulty-aware Instance Pair Matcher (DIPM) that establishes bidirectional guidance between point clouds and images (point cloud → image and image → point cloud), enabling modality-complementary feature interaction at the instance level. DIPM matches cross-modal features based on estimated instance difficulty to produce easy and hard instance pairs, while dedicated Dual-guided Modules exploit both pair types for effective multi-modal feature fusion. Evaluated on the nuScenes benchmark, DGFusion achieves improvements of +1.0% mAP, +0.8% NDS, and +1.3% average recall with a lightweight design, with especially strong gains on challenging instances. This work offers a new paradigm for safe and reliable multimodal 3D perception.
📝 Abstract
As a critical task in autonomous driving perception systems, 3D object detection identifies and tracks key objects such as vehicles and pedestrians. However, detecting distant, small, or occluded objects (hard instances) remains challenging, and such failures directly compromise the safety of autonomous driving systems. We observe that existing multi-modal 3D object detection methods often follow a single-guided paradigm, failing to account for the modality-dependent differences in the information density of hard instances. In this work, we propose DGFusion, built on a dual-guided paradigm: it fully inherits the advantages of the Point-guide-Image paradigm and integrates the Image-guide-Point paradigm to overcome the limitations of either single paradigm. The core of DGFusion, the Difficulty-aware Instance Pair Matcher (DIPM), performs instance-level feature matching based on difficulty to generate easy and hard instance pairs, while the Dual-guided Modules exploit the strengths of both pair types to enable effective multi-modal feature fusion. Experimental results demonstrate that DGFusion outperforms baseline methods on nuScenes, with improvements of +1.0% mAP, +0.8% NDS, and +1.3% average recall. Extensive experiments further demonstrate consistent robustness gains for hard-instance detection across ego-distance, size, visibility, and small-scale training scenarios.
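To make the "difficulty-aware" pairing idea concrete, here is a minimal sketch of how instances might be scored and split into easy and hard pairs before fusion. This is an illustrative assumption, not the paper's implementation: the difficulty heuristic (combining ego-distance, size, and visibility, the same axes the robustness experiments vary), the threshold `hard_thresh`, and all class and function names are hypothetical.

```python
# Hypothetical sketch of difficulty-aware instance pairing.
# The actual DIPM criteria, weights, and thresholds are not specified here.
from dataclasses import dataclass

@dataclass
class Instance:
    id: int           # shared instance id across modalities (assumed)
    distance: float   # ego-distance in meters
    size: float       # bounding-box scale proxy in meters
    visibility: float # 0 (fully occluded) .. 1 (fully visible)

def difficulty(inst: Instance, max_dist: float = 60.0) -> float:
    """Heuristic difficulty in [0, 1]: farther, smaller, more occluded -> harder."""
    d = min(inst.distance / max_dist, 1.0)   # distance term
    s = 1.0 - min(inst.size / 5.0, 1.0)      # small objects are harder
    v = 1.0 - inst.visibility                # occlusion term
    return (d + s + v) / 3.0

def match_pairs(pc_insts, img_insts, hard_thresh=0.5):
    """Pair point-cloud and image instances by id; split pairs by difficulty."""
    img_by_id = {i.id: i for i in img_insts}
    easy, hard = [], []
    for p in pc_insts:
        q = img_by_id.get(p.id)
        if q is None:
            continue  # no cross-modal counterpart found
        score = max(difficulty(p), difficulty(q))
        (hard if score >= hard_thresh else easy).append((p, q))
    return easy, hard
```

In a dual-guided design, the two pair sets would then feed separate fusion paths: easy pairs can follow the point-guide-image route, while hard pairs additionally benefit from image-guide-point cues where LiDAR returns are sparse.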