RAPTOR: Real-Time High-Resolution UAV Video Prediction with Efficient Video Attention

📅 2025-12-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Addressing the trilemma of accuracy, perceptual quality, and low latency in real-time high-resolution video prediction for UAVs operating in dense urban environments, this paper proposes the first single-pass, end-to-end, patch-free prediction framework. The method introduces three key innovations: (1) an Efficient Video Attention (EVA) module that captures long-range spatiotemporal dependencies with a reduced memory footprint; (2) a spatiotemporal-decoupled alternating factorization that lowers computational complexity to O(S + T); and (3) a three-stage progressive training paradigm that stabilizes optimization and improves generalization. On the Jetson AGX Orin edge platform, the framework achieves >30 FPS inference at 512² resolution and attains state-of-the-art PSNR, SSIM, and LPIPS on UAVid, KTH, and a custom high-resolution dataset. In a real-world UAV navigation task, it improves mission success rate by 18% over prior methods.
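To make the complexity gap concrete, here is a back-of-the-envelope count; the feature stride and clip length are illustrative assumptions, not values from the paper. With a $512^2$ input, a 16-pixel feature stride, and $T = 8$ frames:

$$S = \left(\frac{512}{16}\right)^2 = 1024, \qquad ST = 8192$$

$$O\big((ST)^2\big)\colon\ 8192^2 \approx 6.7 \times 10^7 \quad \text{vs.} \quad O(S + T)\colon\ 1024 + 8 = 1032$$

Flattened spacetime attention must score every token pair, while the alternating factorization lets each token attend only to the $S$ tokens of its frame and the $T$ tokens at its spatial location.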

📝 Abstract
Video prediction is plagued by a fundamental trilemma: achieving high resolution and perceptual quality typically comes at the cost of real-time speed, hindering its use in latency-critical applications. This challenge is most acute for autonomous UAVs in dense urban environments, where foreseeing events from high-resolution imagery is non-negotiable for safety. Existing methods, reliant on iterative generation (diffusion, autoregressive models) or quadratic-complexity attention, fail to meet these stringent demands on edge hardware. To break this long-standing trade-off, we introduce RAPTOR, a video prediction architecture that achieves real-time, high-resolution performance. RAPTOR's single-pass design avoids the error accumulation and latency of iterative approaches. Its core innovation is Efficient Video Attention (EVA), a novel translator module that factorizes spatiotemporal modeling. Instead of processing flattened spacetime tokens with $O((ST)^2)$ or $O(ST)$ complexity, EVA alternates operations along the spatial (S) and temporal (T) axes. This factorization reduces the time complexity to $O(S + T)$ and the memory complexity to $O(\max(S, T))$, enabling global context modeling at $512^2$ resolution and beyond while operating directly on dense feature maps with a patch-free design. Complementing this architecture is a three-stage training curriculum that progressively refines predictions from coarse structure to sharp, temporally coherent detail. Experiments show RAPTOR is the first predictor to exceed 30 FPS on a Jetson AGX Orin for $512^2$ video, setting a new state of the art on UAVid, KTH, and a custom high-resolution dataset in PSNR, SSIM, and LPIPS. Critically, RAPTOR boosts the mission success rate in a real-world UAV navigation task by 18%, paving the way for safer and more anticipatory embodied agents.
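A minimal sketch of this alternating factorization, assuming a PyTorch implementation with standard softmax attention along each axis; module names and shapes are illustrative, not from the paper. With softmax attention each token attends to $S + T$ others instead of $S \cdot T$, but the per-axis attention maps remain quadratic; the abstract's $O(S + T)$ time and $O(\max(S, T))$ memory totals presumably also rely on an efficient (e.g., linear) attention kernel along each axis.

```python
# Sketch of EVA-style alternating spatiotemporal attention (assumed design,
# not the paper's code). Names AxialAttention/EVABlock are illustrative.
import torch
import torch.nn as nn


class AxialAttention(nn.Module):
    """Pre-norm residual self-attention along the middle axis of (B, L, C)."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm(x)
        h, _ = self.attn(h, h, h, need_weights=False)
        return x + h


class EVABlock(nn.Module):
    """One alternation step: attend across the S spatial tokens of each
    frame, then across the T time steps of each spatial location, so each
    token interacts with S + T others instead of S * T."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.spatial = AxialAttention(dim, num_heads)
        self.temporal = AxialAttention(dim, num_heads)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, S, C) dense feature tokens; patch-free means S = H * W
        # of the full-resolution feature map, not a coarse patch grid.
        B, T, S, C = x.shape
        # Spatial pass: fold time into the batch, attend over S.
        x = self.spatial(x.reshape(B * T, S, C)).reshape(B, T, S, C)
        # Temporal pass: fold space into the batch, attend over T.
        x = x.transpose(1, 2).reshape(B * S, T, C)
        x = self.temporal(x).reshape(B, S, T, C).transpose(1, 2)
        return x


if __name__ == "__main__":
    feats = torch.randn(2, 8, 32 * 32, 64)  # 8 frames, 32x32 feature map
    print(EVABlock(dim=64)(feats).shape)    # torch.Size([2, 8, 1024, 64])
```

Folding the non-attended axis into the batch dimension keeps every attention call's sequence length at $S$ or $T$, which is what makes a full-resolution, patch-free token grid tractable.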
Problem

Research questions and friction points this paper is trying to address.

Achieving real-time, high-resolution video prediction for UAVs
Reducing computational complexity for edge-hardware deployment
Enhancing safety in autonomous UAV navigation tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Single-pass design avoids iterative error accumulation
Efficient Video Attention factorizes spatiotemporal modeling complexity
Three-stage training curriculum progressively refines coarse structure into sharp detail (see the sketch after this list)
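One plausible reading of the coarse-to-sharp curriculum is a stage-gated loss schedule; the stage boundaries, loss terms, and weights below are illustrative assumptions, as neither the summary nor the abstract specifies them.

```python
# Hypothetical 3-stage progressive schedule: pixel-level structure first,
# then structural similarity, then perceptual sharpness. All boundaries
# and weights are assumed for illustration, not taken from the paper.
def loss_weights(epoch: int) -> dict[str, float]:
    if epoch < 20:  # stage 1: coarse structure (pixel reconstruction)
        return {"l1": 1.0, "ssim": 0.0, "lpips": 0.0}
    if epoch < 40:  # stage 2: add structural fidelity
        return {"l1": 1.0, "ssim": 0.5, "lpips": 0.0}
    return {"l1": 1.0, "ssim": 0.5, "lpips": 0.3}  # stage 3: perceptual detail
```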
Zhan Chen
Georgia Southern University
Mathematical modeling in biology and scientific computing
Zile Guo
Aerospace Information Research Institute, Chinese Academy of Sciences; Key Laboratory of Target Cognition and Application Technology (TCAT); School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences
Enze Zhu
Aerospace Information Research Institute, Chinese Academy of Sciences; Key Laboratory of Target Cognition and Application Technology (TCAT)
Peirong Zhang
Aerospace Information Research Institute, Chinese Academy of Sciences; Key Laboratory of Target Cognition and Application Technology (TCAT); School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences
Xiaoxuan Liu
Aerospace Information Research Institute, Chinese Academy of Sciences; Key Laboratory of Target Cognition and Application Technology (TCAT)
Lei Wang
Aerospace Information Research Institute, Chinese Academy of Sciences; Key Laboratory of Target Cognition and Application Technology (TCAT)
Yidan Zhang
PhD Student, The Chinese University of Hong Kong, Shenzhen
computer vision, deep learning