DisorientLiDAR: Physical Attacks on LiDAR-based Localization

📅 2025-09-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
LiDAR-based localization systems are vulnerable to physical adversarial attacks, yet no practical attack framework targeting localization tasks has been proposed. Method: This paper introduces the first implementable adversarial attack framework specifically designed for LiDAR localization. By reverse-engineering point cloud registration models (HRegNet, D3Feat, GeoTransformer), we identify Top-K critical regions; then, using near-infrared-absorbing materials, we physically occlude corresponding LiDAR reflection points in real-world scenes to disrupt keypoint matching and pose estimation. Contribution/Results: This work pioneers the extension of adversarial attacks from 3D perception to high-precision localization. Evaluated on the KITTI dataset and Autoware platform, the attack induces significant localization drift—increasing average pose error by 3–8×—by occluding less than 0.5% of the point cloud. The physical attack is robustly reproducible, revealing a substantive real-world security vulnerability in autonomous driving localization modules.
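The core step described above, identifying the Top-K critical keypoints and removing the surrounding points, can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the per-point saliency scores would come from the registration model's feature extractor (e.g. D3Feat), which is not reproduced here, and all function and parameter names are illustrative.

```python
import numpy as np

def remove_topk_regions(points, saliency, k=5, radius=1.0):
    """Occlude the regions around the k highest-saliency keypoints.

    points   : (N, 3) array of LiDAR points
    saliency : (N,) per-point importance scores (assumed to come from a
               feature-extraction network; random here for illustration)
    Returns the occluded point cloud and the removed keypoint coordinates.
    """
    # Top-K critical points by saliency score
    keypoints = points[np.argsort(saliency)[-k:]]
    # Distance from every point to every keypoint, shape (N, k)
    dists = np.linalg.norm(points[:, None, :] - keypoints[None, :, :], axis=-1)
    # Keep only points outside every critical region
    keep = (dists > radius).all(axis=1)
    return points[keep], keypoints

# Toy example: 1000 random points with random saliency scores
rng = np.random.default_rng(0)
pts = rng.uniform(-10, 10, size=(1000, 3))
sal = rng.uniform(size=1000)
occluded, kps = remove_topk_regions(pts, sal, k=3, radius=2.0)
print(len(occluded) < len(pts))  # → True (keypoints fall inside their own regions)
```

In the physical attack, this digital removal is realized by covering the corresponding real-world surfaces with near-infrared-absorbing material, so the LiDAR receives no return from those regions.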

📝 Abstract
Deep learning models have been shown to be susceptible to adversarial attacks with visually imperceptible perturbations. Although this poses a serious security challenge for the localization of self-driving cars, attacks on localization remain largely unexplored, as most adversarial attacks have targeted 3D perception. In this work, we propose a novel adversarial attack framework called DisorientLiDAR targeting LiDAR-based localization. By reverse-engineering localization models (e.g., feature-extraction networks), adversaries can identify critical keypoints and strategically remove them, thereby disrupting LiDAR-based localization. Our proposal is first evaluated on three state-of-the-art point-cloud registration models (HRegNet, D3Feat, and GeoTransformer) using the KITTI dataset. Experimental results demonstrate that removing regions containing the Top-K keypoints significantly degrades their registration accuracy. We further validate the attack's impact on the Autoware autonomous driving platform, where hiding merely a few critical regions induces noticeable localization drift. Finally, we extend the attack to the physical world by hiding critical regions with near-infrared-absorptive materials, successfully replicating the attack effects observed on KITTI data. This step brings the attack closer to a realistic physical-world setting, demonstrating the validity and generality of our proposal.
Problem

Research questions and friction points this paper is trying to address.

Physical adversarial attacks on LiDAR localization systems
Strategic removal of critical keypoints to disrupt localization
Real-world implementation using near-infrared absorptive materials
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adversarial attack on LiDAR localization
Reverse-engineering feature extraction networks
Hiding keypoints with infrared materials
Yizhen Lao
Professor, School of Design, Hunan University
computer visioncomputational imagingmachine learning
Yu Zhang
College of Information Science and Engineering, Hunan University, No. 2, Lushan South Road, Yuelu District, Changsha, Hunan Province, 410082, China.
Ziting Wang
College of Information Science and Engineering, Hunan University, No. 2, Lushan South Road, Yuelu District, Changsha, Hunan Province, 410082, China.
Chengbo Wang
School of Design, Hunan University, No. 2, Lushan South Road, Yuelu District, Changsha, Hunan Province, 410082, China.
Yifei Xue
School of Design, Hunan University, No. 2, Lushan South Road, Yuelu District, Changsha, Hunan Province, 410082, China.
Wanpeng Shao
College of Information Science and Engineering, Hunan University, No. 2, Lushan South Road, Yuelu District, Changsha, Hunan Province, 410082, China.