All Vehicles Can Lie: Efficient Adversarial Defense in Fully Untrusted-Vehicle Collaborative Perception via Pseudo-Random Bayesian Inference

📅 2026-03-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the security challenges posed by adversarial attacks in fully untrusted collaborative vehicular perception by proposing the first efficient defense framework that operates without assuming any trusted ego vehicle. Leveraging pseudo-random grouping, Bayesian inference, and spatiotemporal consistency analysis, the method utilizes reliable perception from the previous frame as a dynamic reference to detect and identify malicious vehicles through an extremely lightweight verification mechanism—requiring only 2.5 checks per frame on average. Evaluated across diverse scenarios, the approach restores perception accuracy to 79.4%–86.9% of pre-attack levels, significantly outperforming existing defenses while demonstrating strong generalization and practical applicability.

📝 Abstract
Collaborative perception (CP) enables multiple vehicles to augment their individual perception capacities through the exchange of feature-level sensory data. However, this fusion mechanism is inherently vulnerable to adversarial attacks, especially in fully untrusted-vehicle environments. Existing defense approaches often assume a trusted ego vehicle as a reference or incorporate additional binary classifiers. These assumptions limit their practicality in real-world deployments due to the questionable trustworthiness of ego vehicles, the requirement for real-time detection, and the need for generalizability across diverse scenarios. To address these challenges, we propose a novel Pseudo-Random Bayesian Inference (PRBI) framework, the first efficient defense method tailored for fully untrusted-vehicle CP. PRBI detects adversarial behavior by leveraging temporal perceptual discrepancies, using the reliable perception from the preceding frame as a dynamic reference. Additionally, it employs a pseudo-random grouping strategy that requires only two verifications per frame, while applying Bayesian inference to estimate both the number and identities of malicious vehicles. Theoretical analysis proves the convergence and stability of the proposed PRBI framework. Extensive experiments show that PRBI requires only 2.5 verifications per frame on average, significantly outperforming existing methods, and restores detection precision to between 79.4% and 86.9% of pre-attack levels.
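To make the grouping-plus-inference idea concrete, here is a minimal toy sketch (not the paper's actual algorithm): each frame, vehicles are pseudo-randomly split into two groups and each group undergoes one consistency verification, and a per-vehicle Bayesian posterior of being malicious is updated from the pass/fail outcomes. The verification model is a loud simplification we assume for illustration — a group check fails if and only if it contains at least one malicious vehicle, standing in for the paper's spatiotemporal-consistency check against the previous frame. All names (`prbi_sketch`, the likelihood approximation) are hypothetical.

```python
import random

def prbi_sketch(n_vehicles=8, malicious=(2, 5), n_frames=50, seed=0):
    """Toy illustration of pseudo-random grouping + Bayesian updates.

    Assumption (not from the paper): a group verification fails iff the
    group contains a malicious vehicle (a noiseless stand-in for the
    spatiotemporal-consistency check against the previous frame).
    Returns a per-vehicle posterior probability of being malicious.
    """
    rng = random.Random(seed)
    posterior = [0.5] * n_vehicles  # uninformative prior per vehicle

    for _ in range(n_frames):
        ids = list(range(n_vehicles))
        rng.shuffle(ids)                             # pseudo-random grouping
        half = n_vehicles // 2
        for group in (ids[:half], ids[half:]):       # two verifications/frame
            failed = any(v in malicious for v in group)
            for v in group:
                # Likelihoods under the noiseless-check assumption:
                #   P(fail | v malicious) = 1
                #   P(fail | v benign) ~= 1 - prod(1 - posterior[u]) over
                # the other group members (approximated with current beliefs).
                p_others_clean = 1.0
                for u in group:
                    if u != v:
                        p_others_clean *= 1.0 - posterior[u]
                prior = posterior[v]
                if failed:
                    num = 1.0 * prior
                    den = num + (1.0 - p_others_clean) * (1.0 - prior)
                else:
                    num = 0.0            # a malicious member never passes
                    den = p_others_clean * (1.0 - prior)
                posterior[v] = num / den if den > 0 else prior
    return posterior
```

Under this toy model, benign vehicles are exonerated the first time they land in a passing group, while vehicles that consistently appear in failing groups see their posteriors climb — mirroring, in spirit, how PRBI narrows down malicious identities with only a couple of verifications per frame.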
Problem

Research questions and friction points this paper is trying to address.

collaborative perception
adversarial defense
untrusted vehicles
feature-level fusion
real-time detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pseudo-Random Bayesian Inference
Collaborative Perception
Adversarial Defense
Untrusted Vehicles
Temporal Discrepancy
Yi Yu
Graduate School of Advanced Science and Engineering, Hiroshima University
Multimodal learning · Generative modeling · Multimedia · AI Music
Libing Wu
Wuhan University
Zhuangzhuang Zhang
School of Cyber Science and Engineering, Wuhan University
Jing Qiu
Guangzhou University, Pengcheng Laboratory
AI Application and Security
Lijuan Huo
School of Cyber Science and Engineering, Wuhan University
Jiaqi Feng
School of Cyber Science and Engineering, Wuhan University