HeCoFuse: Cross-Modal Complementary V2X Cooperative Perception with Heterogeneous Sensors

📅 2025-07-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the cross-modal feature fusion challenges and degraded perception robustness that arise from heterogeneous sensor configurations (e.g., camera-only, LiDAR-only, or hybrid setups) in real-world V2X cooperative perception, this paper proposes HeCoFuse, a unified framework. Its key contributions are: (1) a hierarchical channel-spatial dual-attention fusion mechanism for adaptive cross-modal feature weighting; (2) an adaptive spatial resolution adjustment module that balances computational cost against fusion effectiveness; and (3) a modality-aware cooperative learning strategy that dynamically switches the fusion type based on available modalities, improving generalization under heterogeneity. Evaluated across nine heterogeneous configurations of the TUMTraf-V2X benchmark, HeCoFuse sustains 3D mAP between 21.74% and 43.38%, outperforming the CoopDet3D baseline by 1.17% under the full LC+LC configuration. It placed first in the CVPR 2025 DriveX Challenge, establishing a new state of the art on TUMTraf-V2X.
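
The summary does not include code, but a minimal sketch helps make the first contribution concrete: a channel-then-spatial attention block applied to concatenated camera and LiDAR BEV features. Everything below (module name, CBAM-style structure, reduction ratio) is an assumption for illustration, not HeCoFuse's actual implementation.

```python
# Hypothetical sketch of a channel-spatial dual-attention fusion block.
# Structure is CBAM-style by assumption; not the paper's published code.
import torch
import torch.nn as nn


class DualAttentionFusion(nn.Module):
    """Fuse camera and LiDAR BEV features via channel, then spatial, attention."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: pool spatial dims, then re-weight each channel.
        self.channel_mlp = nn.Sequential(
            nn.Linear(2 * channels, 2 * channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(2 * channels // reduction, 2 * channels),
        )
        # Spatial attention: a 7x7 conv over pooled maps gives per-pixel weights.
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)
        self.out_proj = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, cam_feat: torch.Tensor, lidar_feat: torch.Tensor) -> torch.Tensor:
        # cam_feat, lidar_feat: (B, C, H, W) BEV features from each modality.
        x = torch.cat([cam_feat, lidar_feat], dim=1)  # (B, 2C, H, W)
        # Channel attention re-weights the two modalities against each other.
        ch_weight = torch.sigmoid(self.channel_mlp(x.mean(dim=(2, 3))))
        x = x * ch_weight[:, :, None, None]
        # Spatial attention emphasizes regions where fused evidence is strong.
        avg_map = x.mean(dim=1, keepdim=True)
        max_map = x.amax(dim=1, keepdim=True)
        sp_weight = torch.sigmoid(self.spatial_conv(torch.cat([avg_map, max_map], dim=1)))
        return self.out_proj(x * sp_weight)  # (B, C, H, W)
```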

📝 Abstract
Real-world Vehicle-to-Everything (V2X) cooperative perception systems often operate under heterogeneous sensor configurations due to cost constraints and deployment variability across vehicles and infrastructure. This heterogeneity poses significant challenges for feature fusion and perception reliability. To address these issues, we propose HeCoFuse, a unified framework designed for cooperative perception across mixed sensor setups where nodes may carry cameras (C), LiDARs (L), or both (LC). By introducing a hierarchical fusion mechanism that adaptively weights features through a combination of channel-wise and spatial attention, HeCoFuse tackles critical challenges such as cross-modality feature misalignment and imbalanced representation quality. In addition, an adaptive spatial resolution adjustment module balances computational cost and fusion effectiveness. To enhance robustness across different configurations, we further implement a cooperative learning strategy that dynamically adjusts the fusion type based on available modalities. Experiments on the real-world TUMTraf-V2X dataset demonstrate that HeCoFuse achieves 43.22% 3D mAP under the full sensor configuration (LC+LC), outperforming the CoopDet3D baseline by 1.17%, and reaches an even higher 43.38% 3D mAP in the L+LC scenario, while maintaining 3D mAP in the range of 21.74% to 43.38% across nine heterogeneous sensor configurations. These results, validated by our first-place finish in the CVPR 2025 DriveX Challenge, establish HeCoFuse as the current state of the art on the TUMTraf-V2X dataset and demonstrate robust performance across diverse sensor deployments.
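
As a rough illustration of the adaptive spatial resolution adjustment the abstract describes, the sketch below resamples each node's feature map to a shared working resolution chosen under a pixel budget. The budget heuristic and function names are assumptions, not the paper's method.

```python
# Hypothetical sketch: bring heterogeneous node features to one grid
# before fusion, trading resolution against compute via a pixel budget.
import torch
import torch.nn.functional as F


def pick_target_resolution(feats: list[torch.Tensor],
                           budget_px: int = 128 * 128) -> tuple[int, int]:
    """Pick the largest incoming resolution, halved until it fits the budget."""
    largest = max(feats, key=lambda f: f.shape[-2] * f.shape[-1])
    h, w = largest.shape[-2], largest.shape[-1]
    while h * w > budget_px and h > 1 and w > 1:
        h, w = h // 2, w // 2
    return h, w


def adjust_resolution(feat: torch.Tensor, target_hw: tuple[int, int]) -> torch.Tensor:
    """Bilinearly resample a (B, C, H, W) feature map to the shared resolution."""
    if tuple(feat.shape[-2:]) == target_hw:
        return feat
    return F.interpolate(feat, size=target_hw, mode="bilinear", align_corners=False)


# Usage: align features from nodes with different native resolutions.
feats = [torch.randn(1, 64, 256, 256), torch.randn(1, 64, 128, 128)]
target = pick_target_resolution(feats)
aligned = [adjust_resolution(f, target) for f in feats]
```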
Problem

Research questions and friction points this paper is trying to address.

Addresses feature fusion in V2X with heterogeneous sensors
Solves cross-modality misalignment and representation imbalance
Balances computational cost and fusion effectiveness adaptively
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical fusion with adaptive attention weights
Adaptive spatial resolution adjustment module
Dynamic cooperative learning that switches fusion type by available modalities (see the sketch after this list)
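
A minimal sketch of what such modality-aware switching could look like, assuming each node contributes camera features, LiDAR features, or both (the paper's C/L/LC notation). The dispatch logic and the concat-projection stand-in for the dual-attention block are assumptions for illustration.

```python
# Hypothetical sketch of modality-aware fusion dispatch per node.
import torch
import torch.nn as nn


class ModalityAwareFusion(nn.Module):
    """Route each node through a fusion path matching its available sensors."""

    def __init__(self, channels: int):
        super().__init__()
        # Cross-modal path for LC nodes: concat + 1x1 projection here,
        # standing in for the dual-attention block sketched earlier.
        self.cross_modal = nn.Conv2d(2 * channels, channels, kernel_size=1)
        # Single-modality path for C-only or L-only nodes.
        self.single_proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, cam_feat=None, lidar_feat=None):
        # Either input may be None when that sensor is absent on a node.
        if cam_feat is not None and lidar_feat is not None:  # "LC" node
            return self.cross_modal(torch.cat([cam_feat, lidar_feat], dim=1))
        feat = lidar_feat if lidar_feat is not None else cam_feat  # "L" or "C"
        if feat is None:
            raise ValueError("node must provide at least one modality")
        return self.single_proj(feat)
```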