Adversarial Attacks and Detection in Visual Place Recognition for Safer Robot Navigation

📅 2025-06-19
🤖 AI Summary
Visual place recognition (VPR) exhibits insufficient robustness against adversarial attacks, posing critical safety risks to robot navigation. Method: This work presents the first systematic evaluation of VPR-specific adversarial attacks (e.g., FGSM), revealing their severe degradation of localization accuracy; proposes a closed-loop defense framework integrating VPR, a lightweight adversarial attack detector (AAD), and proactive navigation decision-making; and establishes the first taxonomy and benchmark for VPR adversarial attacks and detection. Contribution/Results: Empirical evaluation demonstrates that an AAD with only a 75% true-positive detection rate and up to a 25% false-positive rate reduces along-track localization error by approximately 50% and significantly shortens the duration of unsafe navigation states. The study delivers a reproducible evaluation paradigm and quantitative design guidelines for trustworthy autonomous navigation systems.

📝 Abstract
Stand-alone Visual Place Recognition (VPR) systems have little defence against a well-designed adversarial attack, which can lead to disastrous consequences when deployed for robot navigation. This paper extensively analyzes the effect of four adversarial attacks common in other perception tasks and four novel VPR-specific attacks on VPR localization performance. We then propose how to close the loop between VPR, an Adversarial Attack Detector (AAD), and active navigation decisions by demonstrating the performance benefit of simulated AADs in a novel experiment paradigm -- which we detail for the robotics community to use as a system framework. In the proposed experiment paradigm, we see the addition of AADs across a range of detection accuracies can improve performance over baseline; demonstrating a significant improvement -- such as a ~50% reduction in the mean along-track localization error -- can be achieved with True Positive and False Positive detection rates of only 75% and up to 25% respectively. We examine a variety of metrics including: Along-Track Error, Percentage of Time Attacked, Percentage of Time in an 'Unsafe' State, and Longest Continuous Time Under Attack. Expanding further on these results, we provide the first investigation into the efficacy of the Fast Gradient Sign Method (FGSM) adversarial attack for VPR. The analysis in this work highlights the need for AADs in real-world systems for trustworthy navigation, and informs quantitative requirements for system design.
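The FGSM attack the abstract investigates follows the standard recipe: perturb the input by a small step `eps` along the sign of the loss gradient with respect to the input. A minimal sketch, assuming a toy linear descriptor model as a stand-in for a real VPR network -- the projection `W`, reference descriptor `ref`, and flattened query image `x` below are all hypothetical illustrations, not the paper's setup:

```python
import numpy as np

def fgsm_attack(image, grad, eps=8.0 / 255.0):
    """FGSM: take a step of size eps along the sign of the loss gradient
    w.r.t. the input, then clip back to the valid pixel range [0, 1]."""
    adv = image + eps * np.sign(grad)
    return np.clip(adv, 0.0, 1.0)

# Toy stand-in for a VPR pipeline: a fixed linear "descriptor" model whose
# matching loss is the negative dot product with the correct place descriptor.
rng = np.random.default_rng(0)
W = rng.standard_normal((16, 64))      # hypothetical descriptor projection
ref = rng.standard_normal(16)          # descriptor of the true place
x = rng.uniform(0.2, 0.8, size=64)     # flattened query "image" in [0, 1]

# Loss L(x) = -ref . (W x)  =>  dL/dx = -W^T ref (analytic, no autograd needed)
grad = -W.T @ ref
x_adv = fgsm_attack(x, grad)

sim_clean = ref @ (W @ x)
sim_adv = ref @ (W @ x_adv)
print(sim_clean, sim_adv)  # the attack lowers similarity to the true place
```

In a real VPR system the gradient would come from backpropagating a matching loss through the descriptor network; here it is analytic only to keep the sketch self-contained.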
Problem

Research questions and friction points this paper is trying to address.

Analyzing adversarial attacks on Visual Place Recognition systems
Proposing Adversarial Attack Detector for safer robot navigation
Investigating Fast Gradient Sign Method efficacy in VPR
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes Adversarial Attack Detector for VPR
Uses Fast Gradient Sign Method analysis
Demonstrates ~50% along-track localization error reduction with simulated AADs
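The closed-loop idea above can be sketched with a simulated detector at the paper's quoted operating point (75% true-positive, up to 25% false-positive rates). The fallback action and the 30% attack frequency below are illustrative assumptions for the sketch, not values from the paper:

```python
import random

rng = random.Random(0)

def simulated_aad(under_attack, tp_rate=0.75, fp_rate=0.25):
    """Simulated Adversarial Attack Detector: fires with probability
    tp_rate when an attack is present, fp_rate when it is not."""
    return rng.random() < (tp_rate if under_attack else fp_rate)

def navigation_step(under_attack):
    """Closed-loop decision: trust VPR only while the AAD stays silent;
    otherwise switch to a conservative fallback (e.g. odometry-only)."""
    return "fallback" if simulated_aad(under_attack) else "use_vpr"

# Simulate a run where ~30% of steps are under attack (assumed rate).
attacked = [rng.random() < 0.3 for _ in range(10_000)]
actions = [navigation_step(a) for a in attacked]

# Fraction of attacked steps where the corrupted VPR estimate was still used:
unsafe = sum(1 for a, act in zip(attacked, actions) if a and act == "use_vpr")
print(unsafe / sum(attacked))  # close to 1 - tp_rate
```

Even at this modest operating point the loop discards the corrupted VPR estimate on roughly 75% of attacked steps, at the cost of falling back unnecessarily on roughly 25% of clean ones -- the trade-off the paper's metrics quantify.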
Connor Malone
QUT Centre for Robotics, School of Electrical Engineering and Robotics at the Queensland University of Technology, Brisbane, Australia
Owen Claxton
QUT Centre for Robotics, School of Electrical Engineering and Robotics at the Queensland University of Technology, Brisbane, Australia
Iman Shames
Australian National University
Optimization, Signal Processing, Control Systems, Control Theory, Autonomous Systems
Michael Milford
QUT Professor | Director, QUT Robotics Centre | ARC Laureate Fellow | Microsoft Fellow
Robotics, computational neuroscience, navigation, SLAM, RatSLAM