LLM-A*: Large Language Model Enhanced Incremental Heuristic Search on Path Planning

📅 2024-06-20
🏛️ Conference on Empirical Methods in Natural Language Processing
📈 Citations: 14
Influential: 2
📄 PDF
🤖 AI Summary
Traditional pathfinding algorithms (e.g., A*) suffer from prohibitive computational and memory overhead in large-scale environments, while large language models (LLMs) lack precise spatiotemporal reasoning capabilities. To address this, we propose an LLM-augmented incremental A* framework wherein the LLM serves as a dynamic heuristic generator, integrating environmental semantic encoding and contextual reasoning to guide A*’s local search. Our approach establishes the first tightly coupled integration of LLMs with classical graph search, preserving path validity and asymptotic optimality while enabling efficient, memory-light online planning. Experiments on large-scale grid maps and real-world road networks demonstrate that our method reduces search time by 47% and memory consumption by 63% compared to A* and Theta*, while maintaining 100% path validity.

Technology Category

Application Category

📝 Abstract
Path planning is a fundamental scientific problem in robotics and autonomous navigation, requiring the derivation of efficient routes from starting to destination points while avoiding obstacles. Traditional algorithms like A* and its variants are capable of ensuring path validity but suffer from significant computational and memory inefficiencies as the state space grows. Conversely, large language models (LLMs) excel in broader environmental analysis through contextual understanding, providing global insights into environments. However, they fall short in detailed spatial and temporal reasoning, often leading to invalid or inefficient routes. In this work, we propose LLM-A*, an new LLM based route planning method that synergistically combines the precise pathfinding capabilities of A* with the global reasoning capability of LLMs. This hybrid approach aims to enhance pathfinding efficiency in terms of time and space complexity while maintaining the integrity of path validity, especially in large-scale scenarios. By integrating the strengths of both methodologies, LLM-A* addresses the computational and memory limitations of conventional algorithms without compromising on the validity required for effective pathfinding.
Problem

Research questions and friction points this paper is trying to address.

Combining A* and LLMs to improve path planning efficiency
Addressing computational and memory inefficiencies in large-scale pathfinding
Ensuring path validity while leveraging global environmental insights
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines A* precision with LLM global reasoning
Enhances pathfinding efficiency in large scenarios
Maintains path validity while reducing complexity
S
Silin Meng
University of California, Los Angeles
Y
Yiwei Wang
University of California, Los Angeles
Cheng-Fu Yang
Cheng-Fu Yang
UCLA
Multimodal LearningVision and Language
N
Nanyun Peng
University of California, Los Angeles
K
Kai-Wei Chang
University of California, Los Angeles