Aerial World Model for Long-horizon Visual Generation and Navigation in 3D Space

📅 2025-12-26
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing UAV navigation methods focus primarily on low-level control (e.g., obstacle avoidance) and lack semantic awareness and long-horizon planning. This paper proposes a generative 3D visual world model for UAV navigation that enables semantic-driven trajectory evaluation and autonomous navigation by predicting multi-step egocentric future frames. Its core innovation is the physics-inspired Future Frame Projection (FFP) module, the first to explicitly model the geometric mapping from 4-DoF UAV trajectories and 3D scenes to 2D observations, integrating temporal sequence modeling, differentiable camera projection, and implicit 3D scene representation. Evaluated in large-scale real-world environments, the model significantly improves long-range visual prediction accuracy and raises the end-to-end navigation success rate by 27.3%.
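The summary describes FFP as a geometric mapping from 4-DoF trajectories and 3D scenes to 2D observations. As an illustration of that underlying geometry only (the paper's implementation is not shown here, and all names and conventions below are assumptions), the sketch warps a past frame's pixels into a future viewpoint given per-pixel depth, camera intrinsics, and a relative 4-DoF pose (3D translation plus yaw):

```python
import numpy as np

def yaw_rotation(yaw):
    # Rotation about the camera's y axis; assumes a forward-looking camera
    # whose y axis is aligned with the world vertical, so yaw is the single
    # rotational DoF of a 4-DoF UAV pose (x, y, z, yaw).
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def project_to_future_view(depth, K, t, yaw):
    """Map each pixel of a past frame to its location in a future view.

    depth : (H, W) per-pixel depth of the past frame
    K     : (3, 3) camera intrinsics
    t     : (3,)  future camera position expressed in the past camera frame
    yaw   : float, relative heading change in radians

    Returns an (H, W, 2) array of target pixel coordinates -- the kind of
    coarse geometric prior a learned generator would refine into an image.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T.astype(float)

    # Back-project pixels to 3D points in the past camera frame.
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)

    # Express the points in the future camera frame (rigid 4-DoF transform).
    R = yaw_rotation(yaw)
    pts_future = R.T @ (pts - t.reshape(3, 1))

    # Perspective projection back onto the image plane.
    proj = K @ pts_future
    return (proj[:2] / proj[2:3]).T.reshape(H, W, 2)
```

With a zero relative pose, every pixel maps back to itself; forward motion pushes off-center pixels outward while the principal point stays fixed, matching the expected egocentric zoom-in effect.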


πŸ“ Abstract
Unmanned aerial vehicles (UAVs) have emerged as powerful embodied agents. One of the core abilities is autonomous navigation in large-scale three-dimensional environments. Existing navigation policies, however, are typically optimized for low-level objectives such as obstacle avoidance and trajectory smoothness, lacking the ability to incorporate high-level semantics into planning. To bridge this gap, we propose ANWM, an aerial navigation world model that predicts future visual observations conditioned on past frames and actions, thereby enabling agents to rank candidate trajectories by their semantic plausibility and navigational utility. ANWM is trained on 4-DoF UAV trajectories and introduces a physics-inspired module: Future Frame Projection (FFP), which projects past frames into future viewpoints to provide coarse geometric priors. This module mitigates representational uncertainty in long-distance visual generation and captures the mapping between 3D trajectories and egocentric observations. Empirical results demonstrate that ANWM significantly outperforms existing world models in long-distance visual forecasting and improves UAV navigation success rates in large-scale environments.
Problem

Research questions and friction points this paper is trying to address.

Enables UAVs to incorporate high-level semantics into navigation planning
Predicts future visual observations for ranking candidate trajectories
Improves long-distance visual forecasting and navigation success rates
Innovation

Methods, ideas, or system contributions that make the work stand out.

Predicts future visual observations from past frames and actions
Uses Future Frame Projection for geometric priors in 3D space
Ranks trajectories by semantic plausibility and navigational utility
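The trajectory-ranking idea above amounts to model-predictive selection: imagine a future for each candidate action sequence with the world model, score it with a semantic critic, and prefer the highest-scoring candidate. A minimal sketch, where `predict` and `score` are hypothetical stand-ins for the learned components rather than the paper's actual interfaces:

```python
def rank_trajectories(past_frames, candidates, predict, score):
    """Order candidate action sequences by the utility of their imagined futures.

    predict(past_frames, actions) -> predicted future frames (the world model)
    score(frames) -> float; higher means more semantically useful for the goal
    Both callables are hypothetical placeholders for the learned components.
    """
    utilities = [score(predict(past_frames, actions)) for actions in candidates]
    # Best-first ordering of candidate indices.
    order = sorted(range(len(candidates)), key=lambda i: utilities[i], reverse=True)
    return order, utilities
```

A planner would then execute the first action of the top-ranked candidate and replan, in the usual receding-horizon fashion.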
Weichen Zhang
PhD, University of Sydney
Computer Vision · Deep Learning · Transfer Learning · Domain Adaptation
Peizhi Tang
Tsinghua University
Xin Zeng
Tsinghua University
Fanhang Man
Tsinghua University
Optimizations · Multimodal LLM
Shiquan Yu
Tsinghua University
Zichao Dai
Tsinghua University
Baining Zhao
Tsinghua University
Hongjin Chen
Tsinghua University
Yu Shang
Department of Electronic Engineering, Tsinghua University
Multimodal Learning · LLM Agent · Recommender System
Wei Wu
Tsinghua University
Chen Gao
Tsinghua University
Xinlei Chen
Tsinghua University
Xin Wang
Tsinghua University
Yong Li
Tsinghua University
Wenwu Zhu
Professor, Computer Science, Tsinghua University
Multimedia Computing · Network Representation Learning