UrbanVLA: A Vision-Language-Action Model for Urban Micromobility

📅 2025-10-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of long-horizon navigation for urban micromobility agents (e.g., delivery robots) in large-scale, dynamic, and unstructured environments, this paper proposes UrbanVLA, a route-conditioned vision-language-action (VLA) framework. Methodologically, it introduces an explicitly aligned navigation architecture that grounds noisy route waypoints in multi-scale visual observations, integrating high-level semantic planning with low-level motion control. It further establishes a two-stage training paradigm: supervised fine-tuning (SFT) on simulated environments and trajectories parsed from web videos, followed by reinforcement fine-tuning (RFT) on a mixture of simulation and real-world data, which improves safety and adaptability. Evaluated on the MetaUrban SocialNav benchmark, the method outperforms strong baselines by over 55% in navigation success rate, and extensive real-world deployment on urban streets demonstrates its scalability and robustness under complex, dynamic conditions.

📝 Abstract
Urban micromobility applications, such as delivery robots, demand reliable navigation across large-scale urban environments while following long-horizon route instructions. This task is particularly challenging due to the dynamic and unstructured nature of real-world city areas, yet most existing navigation methods remain tailored to short-scale and controllable scenarios. Effective urban micromobility requires two complementary levels of navigation skills: low-level capabilities such as point-goal reaching and obstacle avoidance, and high-level capabilities, such as route-visual alignment. To this end, we propose UrbanVLA, a route-conditioned Vision-Language-Action (VLA) framework designed for scalable urban navigation. Our method explicitly aligns noisy route waypoints with visual observations during execution, and subsequently plans trajectories to drive the robot. To enable UrbanVLA to master both levels of navigation, we employ a two-stage training pipeline. The process begins with Supervised Fine-Tuning (SFT) using simulated environments and trajectories parsed from web videos. This is followed by Reinforcement Fine-Tuning (RFT) on a mixture of simulation and real-world data, which enhances the model's safety and adaptability in real-world settings. Experiments demonstrate that UrbanVLA surpasses strong baselines by more than 55% in the SocialNav task on MetaUrban. Furthermore, UrbanVLA achieves reliable real-world navigation, showcasing both scalability to large-scale urban environments and robustness against real-world uncertainties.
Problem

Research questions and friction points this paper is trying to address.

Navigation for urban micromobility robots in dynamic environments
Aligning route instructions with visual observations during execution
Mastering both low-level obstacle avoidance and high-level route-visual alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Aligns route waypoints with visual observations
Employs two-stage training with SFT and RFT
Enables scalable urban navigation with VLA framework
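The two-stage pipeline (SFT on demonstrations, then RFT against a reward) can be illustrated with a minimal toy sketch. All names here (`train_sft`, `train_rft`, the scalar "policy") are hypothetical stand-ins for illustration only; the actual UrbanVLA model, losses, and rewards are not specified in this summary.

```python
def train_sft(policy, demos, lr=0.1, steps=50):
    """Stage 1 (SFT): pull a scalar policy toward demonstrated actions,
    i.e. gradient descent on the imitation loss 0.5 * (a - p)^2."""
    for _ in range(steps):
        for action in demos:
            policy += lr * (action - policy)
    return policy

def train_rft(policy, reward_fn, lr=0.05, steps=200, eps=0.1):
    """Stage 2 (RFT): improve the policy against a reward signal,
    here via a finite-difference gradient estimate."""
    for _ in range(steps):
        grad = (reward_fn(policy + eps) - reward_fn(policy - eps)) / (2 * eps)
        policy += lr * grad
    return policy

# Stage 1: imitate demonstrations (mean demonstrated action is 1.0)
policy = train_sft(0.0, demos=[0.8, 1.0, 1.2])
# Stage 2: refine against a reward peaked at 1.5 (e.g. a safety-shaped target)
policy = train_rft(policy, reward_fn=lambda p: -(p - 1.5) ** 2)
```

The toy example only shows the ordering of the two stages: imitation first establishes competent behavior, then reward-driven fine-tuning shifts the policy toward objectives (like safety) that demonstrations alone do not encode.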
Authors
Anqi Li (Peking University)
Zhiyong Wang (Galbot)
Jiazhao Zhang (Peking University)
Minghan Li (Galbot)
Yunpeng Qi (University of Science and Technology of China)
Zhibo Chen (USTC)
Zhizheng Zhang (Galbot)
He Wang (Peking University)