MV2DFusion: Leveraging Modality-Specific Object Semantics for Multi-Modal 3D Detection

📅 2024-08-12
🏛️ arXiv.org
📈 Citations: 6
Influential: 1
🤖 AI Summary
To address the limited robustness of single-modality 3D object detection in autonomous driving, this paper proposes MV2DFusion, a bias-free sparse fusion framework for camera and LiDAR modalities. Its core innovation is a modality-specific query generation mechanism: the image branch models appearance and texture semantics, while the point cloud branch captures geometric structure; the resulting queries are deeply fused through Transformer-driven cross-modal semantic alignment and object-level sparse interaction. By decoupling modality-specific characteristics, MV2DFusion supports plug-and-play integration of arbitrary single-modality detectors, achieving both strong performance and architectural flexibility. Extensive experiments demonstrate state-of-the-art results on the nuScenes and Argoverse2 benchmarks, with particularly significant gains in long-range detection, surpassing prior methods in both mAP and NDS.

📝 Abstract
The rise of autonomous vehicles has significantly increased the demand for robust 3D object detection systems. While cameras and LiDAR sensors each offer unique advantages (cameras provide rich texture information and LiDAR offers precise 3D spatial data), relying on a single modality often leads to performance limitations. This paper introduces MV2DFusion, a multi-modal detection framework that integrates the strengths of both worlds through an advanced query-based fusion mechanism. By introducing an image query generator aligned with image-specific attributes and a point cloud query generator, MV2DFusion effectively combines modality-specific object semantics without biasing toward a single modality. The sparse fusion process is then carried out on these valuable object semantics, ensuring efficient and accurate object detection across various scenarios. The framework's flexibility allows it to integrate with any image- and point cloud-based detectors, showcasing its adaptability and potential for future advancements. Extensive evaluations on the nuScenes and Argoverse2 datasets demonstrate that MV2DFusion achieves state-of-the-art performance, particularly excelling in long-range detection scenarios.
Problem

Research questions and friction points this paper is trying to address.

Integrates camera and LiDAR for robust 3D detection
Fuses modality-specific semantics without biasing toward either sensor
Improves long-range detection accuracy for autonomous vehicles
Innovation

Methods, ideas, or system contributions that make the work stand out.

Query-based fusion mechanism for multi-modal detection
Modality-specific image and point cloud query generators
Sparse fusion driven by object-level semantics
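The mechanism above can be sketched in a few lines: each modality-specific generator maps per-object features into a shared query space, and the concatenated queries interact through attention at the object level rather than over dense feature maps. This is a minimal illustrative sketch, not the paper's implementation; the function names, embedding dimension, and the use of a single NumPy self-attention step in place of the Transformer decoder are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # shared query embedding dimension (hypothetical)

def image_query_generator(roi_feats):
    """Map per-object image ROI features to object queries.
    Stand-in for the appearance/texture-focused image branch."""
    W = rng.standard_normal((roi_feats.shape[1], D)) * 0.1
    return roi_feats @ W

def point_query_generator(center_feats):
    """Map per-object point cloud features to object queries.
    Stand-in for the geometry-focused point cloud branch."""
    W = rng.standard_normal((center_feats.shape[1], D)) * 0.1
    return center_feats @ W

def sparse_fusion(q_img, q_pts):
    """Concatenate modality-specific queries and let them interact
    via one scaled dot-product attention step. The interaction is
    'sparse' in that cost scales with the number of candidate
    objects, not with dense feature-map resolution."""
    Q = np.concatenate([q_img, q_pts], axis=0)  # (N_img + N_pts, D)
    scores = Q @ Q.T / np.sqrt(D)
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    return attn @ Q  # fused object-level queries

q_img = image_query_generator(rng.standard_normal((5, 32)))  # 5 camera candidates
q_pts = point_query_generator(rng.standard_normal((3, 64)))  # 3 LiDAR candidates
fused = sparse_fusion(q_img, q_pts)
print(fused.shape)  # one fused query per candidate: (8, 16)
```

Because neither modality's queries are privileged in the fusion step, the design stays agnostic to which single-modality detectors produce the candidates, which is what enables the plug-and-play property claimed in the summary.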