Black-box Adversarial Attacks on CNN-based SLAM Algorithms

📅 2025-05-30
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work exposes a critical vulnerability of CNN-based SLAM systems (e.g., GCN-SLAM) to black-box adversarial attacks. Addressing the lack of systematic robustness evaluation of feature detectors in prior work, we design and validate black-box adversarial perturbations at both RGB and depth image input levels, specifically targeting the feature detection module. Experiments on the TUM dataset reveal that moderate-strength RGB attacks cause tracking failure in 76% of frames, while depth-image attacks induce catastrophic system-level failures. Our key contributions are: (1) the first systematic demonstration of structural fragility in CNN-SLAM feature detectors under black-box settings; and (2) empirical evidence that the depth modality is significantly more vulnerable than RGB, providing crucial insights for secure multi-modal SLAM design. These findings underscore the urgent need for robustness-aware architectures in vision-based localization systems.

๐Ÿ“ Abstract
Continuous advancements in deep learning have led to significant progress in feature detection, resulting in enhanced accuracy in tasks like Simultaneous Localization and Mapping (SLAM). Nevertheless, the vulnerability of deep neural networks to adversarial attacks remains a challenge for their reliable deployment in applications such as navigation of autonomous agents. Even though CNN-based SLAM algorithms are a growing area of research, there is a notable absence of a comprehensive presentation and examination of adversarial attacks targeting CNN-based feature detectors as part of a SLAM system. Our work introduces black-box adversarial perturbations applied to the RGB images fed into the GCN-SLAM algorithm. Our findings on the TUM dataset [30] reveal that even attacks of moderate scale can lead to tracking failure in as many as 76% of the frames. Moreover, our experiments highlight the catastrophic impact of attacking depth instead of RGB input images on the SLAM system.
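To make the attack setting concrete, the sketch below shows a minimal query-free black-box perturbation: bounded noise added to an input frame without any access to the CNN detector's gradients. This is an illustrative stand-in under assumed conventions (8-bit RGB intensities, a metric depth map, a uniform noise model); the paper's actual perturbation construction and parameters may differ.

```python
import numpy as np

def black_box_perturbation(image, epsilon, vmax=255.0, seed=0):
    """Add bounded uniform noise to an input frame (illustrative only).

    A query-free black-box baseline: the perturbation is sampled without
    any access to the feature detector's gradients or outputs.
    """
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-epsilon, epsilon, size=image.shape)
    # Clamp to the valid sensor range so the result is a plausible frame
    return np.clip(image.astype(np.float64) + noise, 0.0, vmax)

# RGB frame (0-255 intensities): a moderate-scale perturbation
rgb = np.full((480, 640, 3), 128.0)
adv_rgb = black_box_perturbation(rgb, epsilon=8.0)

# Depth frame (here assumed in millimetres): the same mechanism, but the
# noise now shifts scene geometry rather than appearance, which is why
# depth attacks can be far more damaging to tracking
depth = np.full((480, 640), 2000.0)
adv_depth = black_box_perturbation(depth, epsilon=50.0, vmax=10000.0)
```

The same bounded-noise mechanism applies to either modality; only the value range and the physical meaning of the perturbation change, which is the contrast the paper's RGB-versus-depth experiments exploit.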
Problem

Research questions and friction points this paper is trying to address.

Examines adversarial attacks on CNN-based SLAM feature detectors
Introduces black-box perturbations affecting GCN-SLAM algorithm performance
Demonstrates that attacks on depth inputs are far more catastrophic than attacks on RGB inputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Black-box adversarial attacks on CNN-SLAM
Perturbations applied to RGB input images
Attacking depth causes catastrophic SLAM failure
M. Gkeka
Department of Electrical and Computer Engineering, University of Thessaly, Greece
Bowen Sun
Department of Computer Science, William & Mary, USA
Evgenia Smirni
Professor of Computer Science, College of William and Mary
Performance Evaluation, Reliability
C. Antonopoulos
Department of Electrical and Computer Engineering, University of Thessaly, Greece
S. Lalis
Department of Electrical and Computer Engineering, University of Thessaly, Greece
Nikolaos Bellas
Professor, Department of Electrical and Computer Engineering, University of Thessaly
Reconfigurable Computing, Embedded Systems, CAD tools, Computer Architecture, Approximate Computing