SLACK: Attacking LiDAR-Based SLAM with Adversarial Point Injections

📅 2024-10-27
🏛️ 2024 IEEE International Conference on Image Processing Challenges and Workshops (ICIPCW)
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the security vulnerability of LiDAR-SLAM systems under adversarial point injection (PiJ) attacks. We propose SLACK, the first end-to-end deep generative adversarial model for this threat. SLACK injects a minimal number (<0.1%) of carefully optimized adversarial points into raw LiDAR scans, significantly degrading SLAM localization and mapping performance while preserving both visual and metric fidelity of the original point cloud. Key contributions include: (1) a lightweight point cloud autoencoder integrating segmentation-guided attention and contrastive learning, enabling high-fidelity reconstruction and precise adversarial point placement; and (2) the first systematic study of adversarial attacks against learning-based LiDAR-SLAM systems. Evaluated on KITTI and CARLA-64 datasets, SLACK increases pose estimation error by 3.2× and reduces map completeness by over 60%, all while remaining imperceptible to human observers and standard quality metrics.

📝 Abstract
The widespread adoption of learning-based methods for LiDAR processing makes autonomous vehicles vulnerable to adversarial attacks through adversarial point injections (PiJ), posing serious security challenges for navigation and map generation. Despite the critical nature of this threat, no major work studies learning-based attacks on LiDAR-based SLAM. We propose SLACK, an end-to-end deep generative adversarial model that attacks LiDAR scans with a small number of point injections without deteriorating LiDAR quality. To facilitate SLACK, we design a novel yet simple autoencoder that augments contrastive learning with segmentation-based attention for precise reconstructions. SLACK outperforms the best baselines on the point-injection (PiJ) task on the KITTI and CARLA-64 datasets while maintaining accurate scan quality. We demonstrate, qualitatively and quantitatively, that PiJ attacks using only a fraction of LiDAR points severely degrade navigation and map quality without deteriorating LiDAR scan quality.
Problem

Research questions and friction points this paper is trying to address.

Adversarial attacks on LiDAR-based SLAM via point injections
Security threats to autonomous vehicle navigation and mapping
Lack of prior work on learning-based LiDAR SLAM attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

End-to-end deep generative adversarial model
Autoencoder with contrastive learning and segmentation attention
Adversarial point injections without quality deterioration
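The paper does not publish its training objective in this summary, but the contrastive-learning component named above is commonly realized as an InfoNCE/NT-Xent loss over paired embeddings of the same scan. The sketch below is an illustrative assumption, not SLACK's actual loss: `info_nce_loss` and its inputs are hypothetical names for two embedded views of a batch of point clouds.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z_a, z_b, temperature=0.1):
    """Illustrative InfoNCE/NT-Xent contrastive loss (assumed, not SLACK's exact objective).

    z_a, z_b: (N, D) embeddings of two views of the same N point clouds;
    matching rows are positives, all other pairs serve as negatives.
    """
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    # (N, N) matrix of cosine similarities, scaled by temperature
    logits = z_a @ z_b.t() / temperature
    # the positive pair for row i is column i (the diagonal)
    targets = torch.arange(z_a.size(0))
    # symmetric cross-entropy over both view orderings
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```

In this family of losses, pulling embeddings of augmented views of the same scan together while pushing apart other scans gives the autoencoder a discriminative latent space, which is what enables precise reconstruction and targeted placement of the injected points.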