🤖 AI Summary
To address the challenge of long-horizon navigation for urban micromobility agents (e.g., delivery robots) in large-scale, dynamic, and unstructured environments, this paper proposes UrbanVLA, a route-conditioned vision-language-action (VLA) framework. Methodologically, it introduces an explicitly aligned navigation architecture that grounds noisy route waypoints in multi-scale visual observations, integrating high-level semantic planning with low-level motion control. Training follows a two-stage paradigm: supervised fine-tuning (SFT) on simulated environments and on trajectories parsed from web-sourced videos, followed by reinforcement fine-tuning (RFT) on a mixture of simulation and real-world demonstrations. Evaluated on the MetaUrban SocialNav benchmark, UrbanVLA outperforms strong baselines by more than 55% in navigation success rate. Moreover, extensive real-world deployment on urban streets validates its scalability and robustness under complex, dynamic conditions.
📝 Abstract
Urban micromobility applications, such as delivery robots, demand reliable navigation across large-scale urban environments while following long-horizon route instructions. This task is particularly challenging due to the dynamic and unstructured nature of real-world city areas, yet most existing navigation methods remain tailored to short-range, controlled scenarios. Effective urban micromobility requires two complementary levels of navigation skills: low-level capabilities, such as point-goal reaching and obstacle avoidance, and high-level capabilities, such as route-visual alignment. To this end, we propose UrbanVLA, a route-conditioned Vision-Language-Action (VLA) framework designed for scalable urban navigation. Our method explicitly aligns noisy route waypoints with visual observations during execution, and subsequently plans trajectories to drive the robot. To enable UrbanVLA to master both levels of navigation, we employ a two-stage training pipeline. The process begins with Supervised Fine-Tuning (SFT) using simulated environments and trajectories parsed from web videos. This is followed by Reinforcement Fine-Tuning (RFT) on a mixture of simulation and real-world data, which enhances the model's safety and adaptability in real-world settings. Experiments demonstrate that UrbanVLA surpasses strong baselines by more than 55% in the SocialNav task on MetaUrban. Furthermore, UrbanVLA achieves reliable real-world navigation, showcasing both scalability to large-scale urban environments and robustness against real-world uncertainties.
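The control loop the abstract describes (align noisy route waypoints with what the robot currently sees, then plan a trajectory toward the aligned goal) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the snap-to-nearest-traversable-point alignment rule, the fixed-step planner, and all names (`Observation`, `align_waypoint`, `plan_step`) are assumptions standing in for UrbanVLA's learned VLA components.

```python
# Hypothetical sketch of a route-conditioned navigation step:
# 1) ground a noisy route waypoint in the current visual observation,
# 2) take a low-level point-goal step toward the aligned waypoint.
# In UrbanVLA both stages are learned; here they are hand-coded stand-ins.
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class Observation:
    # Traversable positions perceived in the current frame (stand-in for
    # the multi-scale visual observations used by the real model).
    traversable: List[Point]

def align_waypoint(noisy_wp: Point, obs: Observation) -> Point:
    """Route-visual alignment stand-in: snap the noisy waypoint to the
    nearest traversable point visible in the observation."""
    return min(
        obs.traversable,
        key=lambda p: (p[0] - noisy_wp[0]) ** 2 + (p[1] - noisy_wp[1]) ** 2,
    )

def plan_step(pos: Point, target: Point, step: float = 0.5) -> Point:
    """Low-level point-goal reaching: move a fixed step toward the target."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= step:
        return target
    return (pos[0] + step * dx / dist, pos[1] + step * dy / dist)

# Example: a noisy GPS-like waypoint (2.1, 0.2) is grounded to the
# traversable point (2.0, 0.0), and the robot steps toward it.
obs = Observation(traversable=[(2.0, 0.0), (5.0, 5.0)])
goal = align_waypoint((2.1, 0.2), obs)
next_pos = plan_step((0.0, 0.0), goal)
```

The two-level split mirrors the abstract's point: high-level route-visual alignment decides *where* the next grounded goal is, while low-level control decides *how* to reach it safely.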