Uncertainty Quantification for Collaborative Object Detection Under Adversarial Attacks

📅 2025-02-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Collaborative object detection (COD) in vehicle-infrastructure cooperative systems suffers from insufficient robustness and inadequate uncertainty quantification under adversarial attacks. Method: This paper proposes TUQCP (Trusted Uncertainty Quantification in Collaborative Perception), whose novelty is introducing conformal prediction calibration into collaborative perception. TUQCP combines adversarial training with a learnable uncertainty estimation module, supports early- and intermediate-fusion COD models as well as single-agent detectors, and enables reliable confidence calibration even under unknown attack patterns. Contribution/Results: Evaluated on the V2X-Sim simulation dataset, TUQCP achieves an 80.41% improvement in object detection accuracy over baselines under the same adversarial attacks, improving system robustness, interpretability, and decision trustworthiness, and establishing a verifiable uncertainty assurance mechanism for dynamic autonomous driving scenarios.

📝 Abstract
Collaborative Object Detection (COD) and collaborative perception can integrate data or features from multiple entities and improve object detection accuracy compared with individual perception. However, adversarial attacks pose a potential threat to deep learning COD models and introduce high output uncertainty. With unknown attack models, it becomes even more challenging to improve COD resiliency and quantify output uncertainty in highly dynamic perception scenes such as autonomous vehicles. In this study, we propose the Trusted Uncertainty Quantification in Collaborative Perception framework (TUQCP). TUQCP leverages both adversarial training and uncertainty quantification techniques to enhance the adversarial robustness of existing COD models. More specifically, during adversarial training TUQCP first adds perturbations to the information shared by randomly selected agents in object detection collaboration. TUQCP then alleviates the impact of adversarial attacks by providing output uncertainty estimation through a learning-based module and uncertainty calibration through conformal prediction. Our framework works for early- and intermediate-collaboration COD models as well as single-agent object detection models. We evaluate TUQCP on V2X-Sim, a comprehensive collaborative perception dataset for autonomous driving, and demonstrate an 80.41% improvement in object detection accuracy over the baselines under the same adversarial attacks. TUQCP demonstrates the importance of uncertainty quantification for COD under adversarial attacks.
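
The perturbation step in the abstract lends itself to a short sketch. Below is a minimal PGD-style example of adversarially perturbing the features shared by randomly selected agents, assuming a PyTorch detector. The paper's code is not reproduced here: the callables `fuse_and_detect` and `detection_loss` and all hyperparameters are hypothetical placeholders.

```python
import torch

def perturb_shared_features(agent_feats, targets, fuse_and_detect,
                            detection_loss, eps=0.1, alpha=0.02,
                            steps=5, attack_ratio=0.5):
    """PGD-style perturbation of the features shared by randomly chosen agents."""
    # Randomly select which agents' shared features get attacked.
    attacked = torch.rand(len(agent_feats)) < attack_ratio
    deltas = [torch.zeros_like(f, requires_grad=True) if a else None
              for f, a in zip(agent_feats, attacked)]
    live = [d for d in deltas if d is not None]
    if not live:                      # no agent was selected this round
        return list(agent_feats)
    for _ in range(steps):
        feats = [f if d is None else f + d for f, d in zip(agent_feats, deltas)]
        # Maximize the detection loss w.r.t. the perturbations.
        loss = detection_loss(fuse_and_detect(feats), targets)
        grads = torch.autograd.grad(loss, live)
        with torch.no_grad():
            for d, g in zip(live, grads):
                d += alpha * g.sign()   # gradient-ascent step
                d.clamp_(-eps, eps)     # project back into the eps-ball
    return [f if d is None else (f + d).detach()
            for f, d in zip(agent_feats, deltas)]
```

During adversarial training, the detector would then be optimized on the returned perturbed features so that fusion remains robust when some collaborators share corrupted information.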
Problem

Research questions and friction points this paper is trying to address.

Quantify uncertainty in collaborative object detection
Enhance adversarial robustness in COD models
Improve accuracy under adversarial attacks in autonomous vehicles
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adversarial training that perturbs the information shared by randomly selected agents
Learning-based estimation of output uncertainty for collaborative detection
Conformal prediction to calibrate the uncertainty estimates (a minimal sketch follows this list)
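
As an illustration of the calibration step, here is a generic split-conformal sketch in Python. The nonconformity score `1 - p_true` and the coverage level `alpha` are assumptions for illustration; the paper's exact score function for detection outputs is not reproduced here.

```python
import numpy as np

def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    """Fit a split-conformal threshold on a held-out calibration set so that
    prediction sets cover the true class with probability >= 1 - alpha."""
    n = len(cal_labels)
    # Nonconformity score: one minus the probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile level.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, q_level, method="higher")

def prediction_set(probs, qhat):
    """All classes whose nonconformity score clears the calibrated threshold."""
    return np.where(1.0 - probs <= qhat)[0]
```

At test time, `prediction_set` returns every class whose score falls below the calibrated threshold; under exchangeability the set contains the true label with probability at least 1 - alpha, regardless of how well the underlying detector's raw confidences are calibrated.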