UAV-Track VLA: Embodied Aerial Tracking via Vision-Language-Action Models

📅 2026-04-02
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenges of semantic understanding and real-time control in embodied visual tracking for drones operating in dynamic urban environments. To this end, the authors introduce a large-scale multimodal tracking dataset and benchmark comprising 890,000 frames across 176 tasks. They propose an efficient Vision-Language-Action tracking model built on the π₀.₅ architecture, featuring a temporal compression network to reduce inter-frame redundancy, a dual-branch decoder for cross-modal feature disentanglement, and a parallel structure combining a spatial-aware localization head with a flow-matching action expert to strengthen geometric prior modeling and zero-shot generalization. Evaluated on the CARLA simulation platform, the model achieves a 61.76% success rate on long-range pedestrian tracking, maintains tracking for 269.65 frames on average, and reaches a single-step inference latency of 0.0571 s, 33.4% faster than the original π₀.₅ baseline.
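The paper's temporal compression network is not described in detail in this listing. As a hedged illustration only (my own sketch, not the authors' architecture), one generic way to compress a stack of per-frame features is to chunk-pool along the time axis, reducing T frame tokens to K summary tokens; the function name `compress_temporal` is an assumption for this example.

```python
import numpy as np

def compress_temporal(features: np.ndarray, k: int) -> np.ndarray:
    """Illustrative temporal compression: pool T x D frame features to K x D.

    Splits the T frames into k contiguous chunks and mean-pools each chunk,
    a simple stand-in for a learned temporal compression module.
    """
    chunks = np.array_split(features, k, axis=0)
    return np.stack([chunk.mean(axis=0) for chunk in chunks])

# Example: 6 frames of 2-D features compressed to 3 tokens.
feats = np.arange(12, dtype=float).reshape(6, 2)
compressed = compress_temporal(feats, 3)  # shape (3, 2)
```

A learned variant would typically replace the mean pooling with strided temporal convolutions or cross-attention to a small set of query tokens; the chunked mean is only the simplest baseline with the same input/output shape.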
πŸ“ Abstract
Embodied visual tracking is crucial for Unmanned Aerial Vehicles (UAVs) executing complex real-world tasks. In dynamic urban scenarios with complex semantic requirements, Vision-Language-Action (VLA) models show great promise due to their cross-modal fusion and continuous action generation capabilities. To benchmark multimodal tracking in such environments, we construct a dedicated evaluation benchmark and a large-scale dataset encompassing over 890K frames, 176 tasks, and 85 diverse objects. Furthermore, to address temporal feature redundancy and the lack of spatial geometric priors in existing VLA models, we propose an improved VLA tracking model, UAV-Track VLA. Built upon the $π_{0.5}$ architecture, our model introduces a temporal compression net to efficiently capture inter-frame dynamics. Additionally, a parallel dual-branch decoder comprising a spatial-aware auxiliary grounding head and a flow matching action expert is designed to decouple cross-modal features and generate fine-grained continuous actions. Systematic experiments in the CARLA simulator validate the superior end-to-end performance of our method. Notably, in challenging long-distance pedestrian tracking tasks, UAV-Track VLA achieves a 61.76% success rate and 269.65 average tracking frames, significantly outperforming existing baselines. Furthermore, it demonstrates robust zero-shot generalization in unseen environments and reduces single-step inference latency by 33.4% (to 0.0571s) compared to the original $π_{0.5}$, enabling highly efficient, real-time UAV control. Data samples and demonstration videos are available at: https://github.com/Hub-Tian/UAV-Track_VLA.
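The abstract's flow matching action expert generates continuous actions, but this listing includes no implementation. As a hedged illustration only (not the authors' code), the sketch below shows the generic flow-matching sampling loop: start from Gaussian noise and Euler-integrate a velocity field over pseudo-time t in [0, 1] toward the action distribution. The names `velocity_field` and `sample_action`, and the closed-form rectified-flow-style velocity standing in for the learned network, are all assumptions for this example.

```python
import numpy as np

def velocity_field(a: np.ndarray, t: float, target: np.ndarray) -> np.ndarray:
    """Stand-in for a learned velocity network.

    For a linear probability path toward a fixed target action, the ideal
    velocity at state a and time t points from a to the target, rescaled
    so the path reaches the target at t = 1.
    """
    return (target - a) / max(1.0 - t, 1e-6)

def sample_action(target: np.ndarray, steps: int = 100, seed: int = 0) -> np.ndarray:
    """Euler-integrate da/dt = v(a, t) from Gaussian noise at t=0 to t=1."""
    rng = np.random.default_rng(seed)
    a = rng.standard_normal(target.shape)  # noise sample a(0)
    dt = 1.0 / steps
    for i in range(steps):
        a = a + dt * velocity_field(a, i * dt, target)
    return a
```

In a real VLA action expert the analytic `velocity_field` would be a network conditioned on fused vision-language features rather than on a known target; the integration loop itself is the part this sketch shares with flow-matching samplers.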
Problem

Research questions and friction points this paper is trying to address.

Embodied visual tracking
Unmanned Aerial Vehicles
Vision-Language-Action models
Multimodal tracking
Temporal feature redundancy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-Language-Action (VLA)
temporal compression
spatial-aware grounding
embodied aerial tracking
zero-shot generalization
Qiyao Zhang
Beijing Institute of Technology, Beijing, China
Shuhua Zheng
Beijing Institute of Technology, Beijing, China
Jianli Sun
Institute of Automation, Chinese Academy of Sciences, Beijing, China
Chengxiang Li
University of Sanya, Sanya, Hainan, China
Xianke Wu
Beijing University of Posts and Telecommunications, Beijing, China
Zihan Song
Hunan University, Changsha, Hunan, China
Zhiyong Cui
Professor, Beihang University
Foundation Models · Autonomous Driving · Urban Computing · Traffic Prediction · Traffic Control
Yisheng Lv
University of Chinese Academy of Sciences, and Chinese Academy of Sciences
Parallel Intelligence · AI for Transportation · Autonomous Vehicles · Parallel Transportation Systems
Yonglin Tian
Institute of Automation, Chinese Academy of Sciences
Parallel Intelligence · Parallel Unmanned Systems · Intelligent Vehicles · Autonomous Driving