🤖 AI Summary
Visual place recognition (VPR) exhibits insufficient robustness against adversarial attacks, posing critical safety risks to robot navigation. Method: This work presents an extensive evaluation of adversarial attacks on VPR, covering four attacks common in other perception tasks (e.g., FGSM) and four novel VPR-specific attacks, revealing their severe degradation of localization accuracy; it proposes closing the loop between VPR, a simulated Adversarial Attack Detector (AAD), and active navigation decisions; and it introduces a novel experiment paradigm for the robotics community to use as a system framework. Contribution/Results: Empirical evaluation demonstrates that an AAD with a True Positive detection rate of only 75% and a False Positive rate of up to 25% reduces the mean along-track localization error by roughly 50% and shortens the time spent in unsafe navigation states. The study delivers a reproducible evaluation paradigm and quantitative design guidelines for trustworthy autonomous navigation systems.
📝 Abstract
Stand-alone Visual Place Recognition (VPR) systems have little defence against a well-designed adversarial attack, which can lead to disastrous consequences when deployed for robot navigation. This paper extensively analyzes the effect on VPR localization performance of four adversarial attacks common in other perception tasks and four novel VPR-specific attacks. We then propose how to close the loop between VPR, an Adversarial Attack Detector (AAD), and active navigation decisions by demonstrating the performance benefit of simulated AADs in a novel experiment paradigm, which we detail for the robotics community to use as a system framework. In this paradigm, adding AADs across a range of detection accuracies improves performance over the baseline; a significant improvement, such as a ~50% reduction in the mean along-track localization error, can be achieved with True Positive and False Positive detection rates of only 75% and up to 25%, respectively. We examine a variety of metrics, including Along-Track Error, Percentage of Time Attacked, Percentage of Time in an 'Unsafe' State, and Longest Continuous Time Under Attack. Expanding further on these results, we provide the first investigation into the efficacy of the Fast Gradient Sign Method (FGSM) adversarial attack for VPR. The analysis in this work highlights the need for AADs in real-world systems for trustworthy navigation, and informs quantitative requirements for system design.
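For readers unfamiliar with the FGSM attack named above: it perturbs each input coordinate by a fixed step ε in the direction of the sign of the loss gradient with respect to the input. The following is a minimal pure-Python sketch on a toy logistic classifier, not the paper's VPR pipeline; the weights, loss, and function names here are purely illustrative.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_attack(weights, bias, x, label, epsilon):
    """One-step FGSM against a binary logistic classifier.

    For binary cross-entropy loss L with prediction p = sigmoid(w.x + b),
    the gradient w.r.t. the input is dL/dx_i = (p - label) * w_i.
    FGSM shifts each input coordinate by epsilon in the sign of that gradient.
    """
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    err = sigmoid(z) - label                      # dL/dz for BCE loss
    grad = [err * w for w in weights]             # dL/dx
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + epsilon * sign(g) for xi, g in zip(x, grad)]

# Example: push the input away from label 1.
adv = fgsm_attack([1.0, -2.0], 0.0, [0.5, 0.5], 1, 0.1)
# Each coordinate moves by exactly epsilon: adv == [0.4, 0.6]
```

The key property, and the reason FGSM is a standard robustness probe, is that the perturbation is bounded by ε per coordinate (an L∞ ball) yet can still flip the model's decision; in the VPR setting the analogous perturbation is applied to query images to corrupt place matching.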