🤖 AI Summary
This work addresses the security vulnerability of LiDAR-SLAM systems under adversarial point injection (PiJ) attacks. We propose SLACK, the first end-to-end deep generative adversarial model for this threat. SLACK injects a minimal number (<0.1%) of carefully optimized adversarial points into raw LiDAR scans, significantly degrading SLAM localization and mapping performance while preserving both the visual and metric fidelity of the original point cloud. Key contributions include: (1) a lightweight point cloud autoencoder integrating segmentation-guided attention and contrastive learning, enabling high-fidelity reconstruction and precise adversarial point placement; and (2) the first systematic study of adversarial attacks against learning-based LiDAR-SLAM systems. Evaluated on the KITTI and CARLA-64 datasets, SLACK increases pose estimation error by 3.2× and reduces map completeness by over 60%, while remaining imperceptible to human observers and undetected by standard point cloud quality metrics.
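To make the attack setting concrete, below is a minimal PyTorch sketch of a point-injection loop under the stated budget (<0.1% of points). Everything here is an assumption for illustration: `slam_loss_fn` stands in for a hypothetical differentiable surrogate of SLAM pose error (the summary does not specify SLACK's actual generative model or objective), and the one-sided Chamfer term is one plausible way to keep injected points metrically close to the original scan.

```python
import torch

def pij_attack(scan, slam_loss_fn, budget=0.001, steps=200, lr=0.01, alpha=0.1):
    """Sketch of a point-injection (PiJ) attack: optimize a tiny set of
    injected points to degrade a differentiable SLAM surrogate while
    staying close to the original scan.

    scan:         (N, 3) LiDAR point cloud (no gradients required).
    slam_loss_fn: hypothetical differentiable surrogate that is LOW when
                  SLAM performs well; we ascend it (assumption, not
                  SLACK's published objective).
    budget:       fraction of points to inject (<0.1% as in the paper).
    """
    n_inject = max(1, int(budget * scan.shape[0]))
    # Initialize injected points near randomly chosen real points.
    idx = torch.randint(0, scan.shape[0], (n_inject,))
    injected = (scan[idx] + 0.01 * torch.randn(n_inject, 3)).requires_grad_(True)
    opt = torch.optim.Adam([injected], lr=lr)
    for _ in range(steps):
        attacked = torch.cat([scan, injected], dim=0)
        # One-sided Chamfer term keeps injections near the scan surface,
        # so the attacked cloud stays visually and metrically plausible.
        fidelity = torch.cdist(injected, scan).min(dim=1).values.mean()
        loss = -slam_loss_fn(attacked) + alpha * fidelity
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.cat([scan, injected.detach()], dim=0)
```

The fidelity weight `alpha` trades off attack strength against imperceptibility; the reported ability to evade standard quality metrics suggests some such constraint, though the exact mechanism in SLACK may differ.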
📝 Abstract
The widespread adoption of learning-based methods for LiDAR processing makes autonomous vehicles vulnerable to adversarial point injection (PiJ) attacks, which pose serious security challenges for navigation and map generation. Despite the critical nature of this threat, no prior work studies learning-based attacks on LiDAR-based SLAM. We propose SLACK, an end-to-end deep generative adversarial model that attacks LiDAR scans with a small number of point injections without deteriorating scan quality. To facilitate SLACK, we design a novel yet simple autoencoder that augments contrastive learning with segmentation-based attention for precise reconstructions. SLACK outperforms the best baselines on the point injection (PiJ) task on the KITTI and CARLA-64 datasets while maintaining accurate scan quality. We demonstrate, both qualitatively and quantitatively, that PiJ attacks using only a fraction of the LiDAR points severely degrade navigation and map quality without deteriorating the LiDAR scan quality.
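As a companion sketch, the autoencoder's "contrastive learning with segmentation-based attention" could plausibly be realized as attention pooling weighted by per-point segmentation confidence, trained with a standard NT-Xent contrastive loss over augmented views. Both the pooling scheme and the NT-Xent objective are assumptions for illustration; the abstract does not specify the exact formulation.

```python
import torch
import torch.nn.functional as F

def seg_attention_pool(feats, seg_logits):
    """Pool per-point features into a scan embedding, weighting points by
    a segmentation-derived attention score (one plausible reading of
    'segmentation-based attention'; SLACK's actual design may differ).

    feats:      (N, D) per-point encoder features.
    seg_logits: (N,) per-point segmentation confidence logits.
    """
    attn = torch.softmax(seg_logits, dim=0)          # (N,)
    return (attn.unsqueeze(1) * feats).sum(dim=0)    # (D,)

def nt_xent(z1, z2, temperature=0.1):
    """Standard NT-Xent contrastive loss between embeddings of two
    augmented views of the same scans (assumed objective; the abstract
    does not name the contrastive loss used).

    z1, z2: (B, D) embeddings; row i of z1 pairs with row i of z2.
    """
    B = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2B, D)
    sim = z @ z.t() / temperature                        # (2B, 2B)
    # Mask self-similarities so each row's positive is its paired view.
    mask = torch.eye(2 * B, dtype=torch.bool)
    sim = sim.masked_fill(mask, float('-inf'))
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)])
    return F.cross_entropy(sim, targets)
```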