AdvSwap: Covert Adversarial Perturbation with High Frequency Info-swapping for Autonomous Driving Perception

📅 2025-02-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Autonomous driving perception modules are vulnerable to adversarial attacks, but existing global-noise methods offer poor imperceptibility. To address this, the authors propose AdvSwap, a wavelet-based reversible high-frequency information-swapping attack. AdvSwap is the first method to combine selective swapping of wavelet high-frequency components with invertible neural networks (INNs): the attack implicitly erases the original label information and fuses in information from a guidance image. It preserves the semantic integrity of the original image while generating adversarial examples that are imperceptible to human vision and robustly evade mainstream object detectors. Evaluated on GTSRB and nuScenes, AdvSwap achieves high attack success rates against traffic-sign and vehicle detection, and demonstrates strong cross-model transferability and robustness under common corruptions. The work establishes a new paradigm for visual security assessment of autonomous driving systems.

📝 Abstract
The perception modules of autonomous vehicles (AVs) are increasingly susceptible to attacks that exploit vulnerabilities in neural networks through adversarial inputs, thereby compromising AI safety. Some research focuses on creating covert adversarial samples, but existing global-noise techniques are detectable and struggle to deceive the human visual system. This paper introduces a novel adversarial attack method, AdvSwap, which utilizes wavelet-based high-frequency information swapping to generate covert adversarial samples and fool the camera. AdvSwap employs an invertible neural network for selective high-frequency information swapping, preserving both forward propagation and data integrity. The scheme effectively removes the original label data and incorporates the guidance image data, producing concealed and robust adversarial samples. Experimental evaluations and comparisons on the GTSRB and nuScenes datasets demonstrate that AdvSwap can mount concealed attacks on common traffic targets. The generated adversarial samples are also difficult for both humans and algorithms to perceive. Moreover, the method exhibits strong attack robustness and transferability.
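The core idea described in the abstract can be sketched in miniature. The toy below (a simplification, not the paper's method, which uses a trained invertible neural network rather than a fixed transform) uses a one-level Haar wavelet decomposition: the original image's low-frequency band is kept, so the visible content is preserved, while its high-frequency bands are replaced with those of a guidance image. The function names and the choice of the Haar basis are illustrative assumptions.

```python
import numpy as np

def haar_dwt2(x):
    # One-level 2-D Haar transform (illustrative stand-in for the
    # learned invertible transform used by AdvSwap).
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0   # row-wise average (low-pass)
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0   # row-wise difference (high-pass)
    ll = (lo[0::2] + lo[1::2]) / 2.0       # low-low: coarse image content
    lh = (lo[0::2] - lo[1::2]) / 2.0       # high-frequency detail bands
    hl = (hi[0::2] + hi[1::2]) / 2.0
    hh = (hi[0::2] - hi[1::2]) / 2.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    # Exact inverse of haar_dwt2: the transform is invertible,
    # so no information is lost in either direction.
    h, w = ll.shape
    lo = np.empty((2 * h, w)); hi = np.empty((2 * h, w))
    lo[0::2], lo[1::2] = ll + lh, ll - lh
    hi[0::2], hi[1::2] = hl + hh, hl - hh
    x = np.empty((2 * h, 2 * w))
    x[:, 0::2], x[:, 1::2] = lo + hi, lo - hi
    return x

def swap_high_freq(original, guidance):
    # Keep the original's low-frequency (semantic) band, but replace
    # its high-frequency bands with the guidance image's -- the swap
    # that AdvSwap performs implicitly inside an INN.
    ll_o, _, _, _ = haar_dwt2(original)
    _, lh_g, hl_g, hh_g = haar_dwt2(guidance)
    return haar_idwt2(ll_o, lh_g, hl_g, hh_g)
```

Because the transform is perfectly invertible, the swapped image's low-frequency band is bit-identical to the original's, which is why the perturbation stays visually inconspicuous while the high-frequency content (which detectors are sensitive to) comes entirely from the guidance image.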
Problem

Research questions and friction points this paper is trying to address.

Generates covert adversarial samples for AVs
Utilizes high-frequency info-swapping technique
Enhances attack robustness and transferability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Wavelet-based high-frequency info-swapping
Invertible neural network for data integrity
Covert adversarial samples for perception attacks
Yuanhao Huang
School of Transportation Science and Engineering, Beihang University, Beijing 100191, P.R. China, and State Key Lab of Intelligent Transportation System, Beijing 100191, P.R. China
Qinfan Zhang
School of Transportation Science and Engineering, Beihang University, Beijing 100191, P.R. China, and State Key Lab of Intelligent Transportation System, Beijing 100191, P.R. China
Jiandong Xing
School of Transportation Science and Engineering, Beihang University, Beijing 100191, P.R. China, and State Key Lab of Intelligent Transportation System, Beijing 100191, P.R. China
Mengyue Cheng
School of Transportation Science and Engineering, Beihang University, Beijing 100191, P.R. China, and State Key Lab of Intelligent Transportation System, Beijing 100191, P.R. China
Haiyang Yu
School of Transportation Science and Engineering, Beihang University, Beijing 100191, P.R. China, and Zhongguancun Laboratory, Beijing 100094, P.R. China
Yilong Ren
Associate Professor, School of Transportation Science and Engineering, Beihang University
Cooperative vehicle infrastructure systems, traffic big data, traffic signal control
Xiao Xiong
Nankai University
Failure diagnosis