🤖 AI Summary
LiDAR-based localization systems are vulnerable to physical adversarial attacks, yet no practical attack framework targeting localization tasks has been proposed. Method: This paper introduces the first implementable adversarial attack framework specifically designed for LiDAR localization. By reverse-engineering point cloud registration models (HRegNet, D3Feat, GeoTransformer), we identify Top-K critical regions; then, using near-infrared-absorbing materials, we physically occlude corresponding LiDAR reflection points in real-world scenes to disrupt keypoint matching and pose estimation. Contribution/Results: This work pioneers the extension of adversarial attacks from 3D perception to high-precision localization. Evaluated on the KITTI dataset and Autoware platform, the attack induces significant localization drift—increasing average pose error by 3–8×—by occluding less than 0.5% of the point cloud. The physical attack is robustly reproducible, revealing a substantive real-world security vulnerability in autonomous driving localization modules.
📝 Abstract
Deep learning models have been shown to be susceptible to adversarial attacks with visually imperceptible perturbations. Although this poses a serious security challenge for the localization of self-driving cars, attacks on localization have received very little exploration, as most adversarial attacks have targeted 3D perception. In this work, we propose DisorientLiDAR, a novel adversarial attack framework targeting LiDAR-based localization. By reverse-engineering localization models (e.g., feature extraction networks), adversaries can identify critical keypoints and strategically remove them, thereby disrupting LiDAR-based localization. We first evaluate our attack on three state-of-the-art point-cloud registration models (HRegNet, D3Feat, and GeoTransformer) using the KITTI dataset. Experimental results demonstrate that removing regions containing the Top-K keypoints significantly degrades their registration accuracy. We further validate the attack's impact on the Autoware autonomous driving platform, where hiding merely a few critical regions induces noticeable localization drift. Finally, we extend the attack to the physical world by covering critical regions with near-infrared-absorptive materials, successfully replicating the attack effects observed on KITTI data. This step brings the attack closer to a realistic physical-world threat and demonstrates the validity and generality of our proposal.
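The core attack step described above — scoring points with the victim's feature extractor, picking the Top-K most salient keypoints, and deleting the regions around them — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the per-point saliency scores are assumed to come from the attacked registration model (HRegNet, D3Feat, or GeoTransformer), and `remove_topk_regions` and its `radius` parameter are hypothetical names chosen here for clarity.

```python
import numpy as np

def remove_topk_regions(points, scores, k=32, radius=1.0):
    """Occlude spherical regions around the k highest-scoring keypoints.

    points: (N, 3) LiDAR point cloud
    scores: (N,) per-point keypoint/saliency scores, assumed to be
            obtained from the victim feature extraction network
    Returns the point cloud with the Top-K critical regions removed,
    mimicking physical occlusion with NIR-absorptive material.
    """
    # Indices of the Top-K most salient points (the attack targets)
    topk = np.argsort(scores)[-k:]
    centers = points[topk]

    # Remove every point within `radius` of any Top-K center
    dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=-1)
    keep = (dists > radius).all(axis=1)
    return points[keep]
```

In a real attack the removed regions would correspond to physical surfaces covered with near-infrared-absorbing material, so the LiDAR simply receives no return from them; the digital deletion above serves as a proxy for that effect when evaluating on recorded KITTI scans.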