How Secure Are Large Language Models (LLMs) for Navigation in Urban Environments?

📅 2024-02-14
🏛️ arXiv.org
📈 Citations: 5
Influential: 0
📄 PDF
🤖 AI Summary
This work systematically uncovers, for the first time, critical safety vulnerabilities of large language models (LLMs) in urban outdoor navigation—posing tangible risks to safety-critical applications such as autonomous driving and logistics. Method: We propose Navigation Prompt Injection (NPI/NPS), a novel prompt-level adversarial attack paradigm tailored to urban street-view inputs, which manipulates LLMs’ path-decision outputs via subtle input perturbations. We design both white-box and black-box attack variants with strong generalizability across models and datasets, and introduce NPE, a preliminary defense mechanism. Contribution/Results: Evaluated on Touchdown and Map2Seq across multiple LLM-based navigation models under few-shot and fine-tuned settings, NPI/NPS significantly degrades all seven navigation metrics and exhibits strong cross-model transferability; NPE demonstrably improves robustness. Our study provides the first empirical evidence of real-world safety risks in LLM-powered navigation systems and establishes the inaugural benchmark framework for securing them.

Technology Category

Application Category

📝 Abstract
In the field of robotics and automation, navigation systems based on Large Language Models (LLMs) have recently demonstrated impressive performance. However, the security aspects of these systems have received relatively less attention. This paper pioneers the exploration of vulnerabilities in LLM-based navigation models in urban outdoor environments, a critical area given the widespread application of this technology in autonomous driving, logistics, and emergency services. Specifically, we introduce a novel Navigational Prompt Attack that manipulates LLM-based navigation models by perturbing the original navigational prompt, leading to incorrect actions. Based on the method of perturbation, our attacks are divided into two types: Navigational Prompt Insert (NPI) Attack and Navigational Prompt Swap (NPS) Attack. We conducted comprehensive experiments on an LLM-based navigation model that employs various LLMs for reasoning. Our results, derived from the Touchdown and Map2Seq street-view datasets under both few-shot learning and fine-tuning configurations, demonstrate notable performance declines across seven metrics in the face of both white-box and black-box attacks. Moreover, our attacks can be easily extended to other LLM-based navigation models with similarly effective results. These findings highlight the generalizability and transferability of the proposed attack, emphasizing the need for enhanced security in LLM-based navigation systems. As an initial countermeasure, we propose the Navigational Prompt Engineering (NPE) Defense strategy, which concentrates on navigation-relevant keywords to reduce the impact of adversarial attacks. While initial findings indicate that this strategy enhances navigational safety, there remains a critical need for the wider research community to develop stronger defense methods to effectively tackle the real-world challenges faced by these systems.
Problem

Research questions and friction points this paper is trying to address.

Explores vulnerabilities in LLM-based urban navigation systems
Introduces Navigational Prompt Attack disrupting model actions
Proposes defense strategy to mitigate adversarial attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces Navigational Prompt Attack techniques
Proposes Navigational Prompt Engineering Defense
Tests attacks on LLM-based navigation models
🔎 Similar Papers
No similar papers found.
C
Congcong Wen
Electrical and Computer Engineering, New York University Abu Dhabi, UAE.
J
Jiazhao Liang
Electrical and Computer Engineering, New York University Abu Dhabi, UAE.
S
Shuaihang Yuan
Electrical and Computer Engineering, New York University Abu Dhabi, UAE.
H
Hao Huang
Center for Artificial Intelligence and Robotics, New York University Abu Dhabi, UAE
Y
Yi Fang
Tandon School of Engineering, New York University, New York, United States.