🤖 AI Summary
To address the challenges of large search spaces, suboptimal path quality, and slow convergence in autonomous UAV path planning within complex environments, this paper proposes VLM-RRT—a novel framework that integrates vision-language models (VLMs), such as LLaVA and Qwen-VL, into the RRT paradigm for the first time. Leveraging VLMs’ semantic understanding of scene imagery, VLM-RRT generates directional priors to enable semantics-guided biased sampling. By fusing multimodal environmental representations with an enhanced RRT search mechanism, the method significantly improves planning performance in both simulation and real-world scenarios: planning time is reduced by 42%, path length is shortened by 23%, and the success rate reaches 98.7%, outperforming standard RRT and Informed-RRT*. This work establishes an efficient and robust semantic-aware path-planning paradigm tailored for time-critical applications such as post-disaster response.
📝 Abstract
Path planning is a fundamental capability of autonomous Unmanned Aerial Vehicles (UAVs), enabling them to efficiently navigate toward a target region or explore complex environments while avoiding obstacles. Traditional path-planning methods, such as Rapidly-exploring Random Trees (RRT), have proven effective but often encounter significant challenges. These include high search-space complexity, suboptimal path quality, and slow convergence—issues that are particularly problematic in high-stakes applications like disaster response, where rapid and efficient planning is critical. To address these limitations and enhance path-planning efficiency, we propose Vision Language Model RRT (VLM-RRT), a hybrid approach that integrates the pattern recognition capabilities of Vision Language Models (VLMs) with the path-planning strengths of RRT. By leveraging VLMs to provide initial directional guidance based on environmental snapshots, our method biases sampling toward regions more likely to contain feasible paths, significantly improving sampling efficiency and path quality. Extensive quantitative and qualitative experiments with various state-of-the-art VLMs demonstrate the effectiveness of the proposed approach.
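To make the biased-sampling idea concrete, the sketch below shows one plausible way a VLM's directional guidance could steer RRT sampling: with some probability, samples are drawn inside a cone around a VLM-suggested heading, and otherwise uniformly over the workspace (preserving RRT's probabilistic completeness). The cone-based rule, the `bias_prob` and `cone_half_angle` parameters, and the function names are illustrative assumptions, not the paper's exact mechanism.

```python
import math
import random

def biased_sample(bounds, vlm_heading, root=(0.0, 0.0),
                  bias_prob=0.5, cone_half_angle=math.pi / 6):
    """Draw a 2D sample, biased toward a VLM-suggested heading (hypothetical scheme).

    bounds: (xmin, xmax, ymin, ymax) workspace limits.
    vlm_heading: heading angle in radians suggested by the VLM.
    root: point the guidance cone emanates from (e.g., the tree root or current node).
    """
    xmin, xmax, ymin, ymax = bounds
    if random.random() < bias_prob:
        # Biased draw: random heading within the cone, random range along it.
        theta = vlm_heading + random.uniform(-cone_half_angle, cone_half_angle)
        r = random.uniform(0.0, max(xmax - xmin, ymax - ymin))
        x = min(max(root[0] + r * math.cos(theta), xmin), xmax)
        y = min(max(root[1] + r * math.sin(theta), ymin), ymax)
        return (x, y)
    # Uniform fallback keeps the whole free space reachable.
    return (random.uniform(xmin, xmax), random.uniform(ymin, ymax))
```

The uniform fallback matters: purely directional sampling could trap the tree behind an obstacle, whereas mixing in uniform samples retains RRT's coverage guarantees while the VLM prior merely accelerates growth toward promising regions.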