🤖 AI Summary
Trajectory generation and selection remain challenging for map-free, outdoor human-centered navigation. Method: This paper proposes a zero-shot, human-like navigation approach integrating traversability modeling and vision-language understanding. It employs a conditional variational autoencoder (CVAE) to generate multiple candidate trajectories that comply with scene-specific traversability constraints, and introduces a vision-language model (VLM) for zero-shot, context-aware optimal trajectory selection, enhanced by a visual prompting scheme for improved semantic alignment. Contribution/Results: The core innovation lies in leveraging VLM-driven semantic reasoning for map-free trajectory decision-making, enabling implicit modeling of human intent and cross-scene generalization. Experiments across four representative outdoor scenarios demonstrate a 20.81% improvement in traversability satisfaction rate and a 28.51% increase in human-like path quality, significantly outperforming existing global navigation methods.
📝 Abstract
We present a multi-modal trajectory generation and selection algorithm for real-world mapless outdoor navigation in human-centered environments. Such environments contain rich features like crosswalks, grass, and curbs that are easily interpretable by humans, but not by mobile robots. We aim to compute suitable trajectories that (1) satisfy the environment-specific traversability constraints and (2) produce human-like paths while navigating on crosswalks, sidewalks, etc. Our formulation uses a Conditional Variational Autoencoder (CVAE) generative model enhanced with traversability constraints to generate multiple candidate trajectories for global navigation. We develop a visual prompting approach and leverage the zero-shot semantic understanding and logical reasoning of a Vision-Language Model (VLM) to choose the best trajectory given contextual information about the task. We evaluate our method in various outdoor scenes with wheeled robots and compare its performance with other global navigation algorithms. In practice, we observe an average improvement of 20.81% in satisfying traversability constraints and 28.51% in terms of human-like navigation across four different outdoor navigation scenarios.
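The generate-then-select pipeline described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: `decode_trajectory` stands in for a trained CVAE decoder, and `select_best` ranks candidates with a simple traversability-cost proxy where the paper instead queries a VLM with a visual prompt showing the numbered candidates overlaid on the camera image.

```python
import numpy as np

rng = np.random.default_rng(0)

def decode_trajectory(z, context, horizon=8):
    # Stand-in for a trained CVAE decoder: maps a latent sample z and a
    # context encoding (here just a goal bearing) to a path of 2D waypoints.
    heading = context["goal_bearing"] + 0.5 * z[0]
    step = 0.5 + 0.1 * z[1]
    angles = heading + 0.05 * z[2] * np.arange(horizon)
    deltas = np.stack([step * np.cos(angles), step * np.sin(angles)], axis=1)
    return np.cumsum(deltas, axis=0)  # shape (horizon, 2)

def generate_candidates(context, n=5):
    # Sample latent codes z ~ N(0, I) and decode each into a candidate path.
    return [decode_trajectory(rng.standard_normal(3), context) for _ in range(n)]

def traversability_cost(traj, costmap):
    # Sum per-waypoint costs from a local traversability grid.
    idx = np.clip(traj.astype(int), 0, costmap.shape[0] - 1)
    return costmap[idx[:, 0], idx[:, 1]].sum()

def select_best(candidates, costmap):
    # Proxy for the VLM selection step: pick the lowest-cost candidate.
    costs = [traversability_cost(t, costmap) for t in candidates]
    return candidates[int(np.argmin(costs))]

context = {"goal_bearing": 0.3}
costmap = np.abs(rng.standard_normal((32, 32)))
best = select_best(generate_candidates(context), costmap)
print(best.shape)  # (8, 2)
```

Sampling several latent codes is what makes the generator multi-modal: each z yields a distinct plausible path, and the selection stage (the VLM in the paper) supplies the scene-level reasoning that a purely geometric cost cannot.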