🤖 AI Summary
To address the challenges of modeling long-range contextual dependencies in 3D point clouds—stemming from sparsity, structural incompleteness, and limited semantic information—this paper proposes a unified 3D detection framework integrating multi-modal features and graph-based reasoning. Our method introduces three key innovations: (1) an adaptive cross-modal Transformer that dynamically aligns and mutually enhances image and point cloud features; (2) a multi-scale graph attention mechanism that jointly captures local geometric structures and global semantic relationships; and (3) a cascaded graph decoder leveraging both spatial proximity and feature similarity to enable multi-stage detection refinement. Evaluated on SUN RGB-D and ScanNetV2, our approach achieves AP₂₅ scores of 70.6% and 75.1%, and AP₅₀ scores of 51.2% and 60.8%, respectively—substantially outperforming state-of-the-art methods.
📝 Abstract
Despite significant progress in 3D object detection, point clouds remain challenging due to sparse data, incomplete structures, and limited semantic information. Capturing contextual relationships between distant objects presents additional difficulties. To address these challenges, we propose GraphFusion3D, a unified framework combining multi-modal fusion with advanced feature learning. Our approach introduces the Adaptive Cross-Modal Transformer (ACMT), which fuses image features into point representations to enrich both geometric and semantic information. For proposal refinement, we introduce the Graph Reasoning Module (GRM), a novel mechanism that models neighborhood relationships to capture local geometric structures and global semantic context simultaneously. The module uses multi-scale graph attention to dynamically weight both spatial proximity and feature similarity between proposals. We further employ a cascade decoder that progressively refines detections through multi-stage predictions. Extensive experiments on SUN RGB-D (70.6% AP$_{25}$ and 51.2% AP$_{50}$) and ScanNetV2 (75.1% AP$_{25}$ and 60.8% AP$_{50}$) demonstrate substantial improvements over existing approaches.
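To make the abstract's core idea concrete, here is a minimal sketch of graph attention over detection proposals that weights both spatial proximity and feature similarity, as the GRM is described as doing. This is an illustrative toy in NumPy, not the paper's implementation: the Gaussian distance kernel, cosine similarity, additive combination, and the `sigma` parameter are all assumptions for exposition.

```python
import numpy as np

def graph_attention_weights(centers, feats, sigma=1.0):
    """Toy proposal-graph attention (hypothetical, not the paper's exact form).

    centers: (N, 3) proposal box centers
    feats:   (N, D) proposal feature vectors
    Returns an (N, N) row-stochastic attention matrix combining
    spatial proximity and feature similarity.
    """
    # Spatial proximity: Gaussian kernel on pairwise center distances.
    d2 = np.sum((centers[:, None, :] - centers[None, :, :]) ** 2, axis=-1)
    spatial = np.exp(-d2 / (2.0 * sigma ** 2))

    # Feature similarity: cosine similarity between proposal features.
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    similarity = f @ f.T

    # Combine both cues, then normalize each row with a softmax.
    logits = spatial + similarity
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    w = np.exp(logits)
    return w / w.sum(axis=1, keepdims=True)
```

In a multi-scale variant such attention would be computed over neighborhoods of several radii (or k-NN sizes) and the results aggregated; here a single global graph keeps the sketch short.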