Learning Occlusion-Robust Vision Transformers for Real-Time UAV Tracking

📅 2025-04-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address frequent target occlusions caused by buildings, trees, and other obstacles in real-time UAV tracking, this paper proposes ORTrack, an occlusion-robust Vision Transformer tracking framework. Methodologically: (1) it introduces a random spatial masking mechanism modeled by a spatial Cox process to approximately simulate target occlusions and enforce the occlusion invariance of ViT features; (2) it designs an Adaptive Feature-Based Knowledge Distillation (AFKD) method that adapts to task difficulty, jointly optimizing accuracy and efficiency to yield a lightweight variant, ORTrack-D. Experiments demonstrate that ORTrack achieves state-of-the-art performance across multiple UAV tracking benchmarks, while ORTrack-D maintains high accuracy with significantly faster inference, enabling real-time operation. The source code is publicly available.

📝 Abstract
Single-stream architectures with Vision Transformer (ViT) backbones have recently shown great potential for real-time UAV tracking. However, frequent occlusions from obstacles such as buildings and trees expose a major drawback: these models often lack effective strategies for handling occlusions. New methods are needed to enhance the occlusion resilience of single-stream ViT models in aerial tracking. In this work, we propose learning Occlusion-Robust Representations (ORR) based on ViTs for UAV tracking by enforcing invariance of a target's feature representation with respect to random masking operations modeled by a spatial Cox process. This random masking approximately simulates target occlusions, enabling us to learn ViTs that are robust to target occlusion in UAV tracking. We term this framework ORTrack. Additionally, to facilitate real-time applications, we propose an Adaptive Feature-Based Knowledge Distillation (AFKD) method to create a more compact tracker that adaptively mimics the behavior of the teacher model ORTrack according to task difficulty. This student model, dubbed ORTrack-D, retains much of ORTrack's performance while offering higher efficiency. Extensive experiments on multiple benchmarks validate the effectiveness of our method, demonstrating its state-of-the-art performance. Code is available at https://github.com/wuyou3474/ORTrack.
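The masking idea described above can be illustrated with a small sketch. A Cox process is a Poisson point process whose intensity is itself random; below, the intensity is drawn from a Gamma distribution (one illustrative choice — the paper's exact construction, shape parameters, and masking granularity are assumptions here), and a ViT patch is masked whenever its cell receives at least one point.

```python
import numpy as np

def cox_process_mask(grid_h=14, grid_w=14, mean_intensity=0.3, seed=None):
    """Sample a binary patch mask from a simple spatial Cox process.

    The per-sample intensity is Gamma-distributed (an illustrative choice),
    then each patch cell draws a Poisson count with that intensity; cells
    with at least one point are marked as occluded.
    """
    rng = np.random.default_rng(seed)
    # Random intensity, constant over the grid for this sample.
    lam = rng.gamma(shape=2.0, scale=mean_intensity / 2.0)
    # Poisson counts per patch cell; mask cells containing any point.
    counts = rng.poisson(lam, size=(grid_h, grid_w))
    return (counts > 0).astype(np.float32)  # 1 = masked (occluded) patch

def apply_mask(tokens, mask):
    """Zero out masked patch tokens. tokens: (N, D); mask: (H, W), H*W == N."""
    return tokens * (1.0 - mask.reshape(-1, 1))
```

Because the intensity is resampled per training example, the number and spread of masked patches vary from sample to sample, giving more diverse simulated occlusions than a fixed masking ratio would.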
Problem

Research questions and friction points this paper is trying to address.

Enhancing occlusion resilience in UAV tracking models
Developing real-time ViT-based trackers with occlusion robustness
Creating compact trackers via adaptive knowledge distillation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Occlusion-Robust Representations using ViTs
Random masking simulates target occlusions
Adaptive Feature-Based Knowledge Distillation
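The distillation component can be sketched as a feature-matching loss whose per-sample contribution is weighted by a difficulty score. This is a minimal illustration, not the paper's exact formulation: the weighting scheme and the source of the difficulty signal (e.g., the teacher's per-frame tracking loss) are assumptions.

```python
import numpy as np

def afkd_loss(student_feat, teacher_feat, difficulty):
    """Illustrative adaptive feature-distillation loss (a sketch).

    Computes a per-sample MSE between student and teacher features, then
    weights each sample by a normalized difficulty score so that harder
    examples contribute more to the distillation objective.
    """
    # Per-sample MSE, averaged over all non-batch dimensions.
    per_sample = np.mean(
        (student_feat - teacher_feat) ** 2,
        axis=tuple(range(1, student_feat.ndim)),
    )
    # Normalize difficulty scores into weights summing to ~1.
    weights = difficulty / (difficulty.sum() + 1e-8)
    return float(np.sum(weights * per_sample))
```

With uniform difficulty this reduces to a plain mean MSE, so the adaptive weighting only changes the objective when samples genuinely differ in difficulty.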
You Wu
College of Computer Science and Engineering, Guilin University of Technology, China
Xucheng Wang
School of Computer Science, Fudan University, Shanghai, China
Xiangyang Yang
College of Computer Science and Engineering, Guilin University of Technology, China
Mengyuan Liu
College of Computer Science and Engineering, Guilin University of Technology, China
Dan Zeng
Sun Yat-sen University
Biometrics · computer vision · deep learning
Hengzhou Ye
College of Computer Science and Engineering, Guilin University of Technology, China
Shuiwang Li
Guilin University of Technology