Fly360: Omnidirectional Obstacle Avoidance within Drone View

πŸ“… 2026-03-06
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the challenge of achieving omnidirectional collision avoidance for drones in complex environments with obstacles from arbitrary directions. The authors propose Fly360, a two-stage perception-decision framework that leverages panoramic RGB images to generate depth maps and employs a lightweight policy network to directly output velocity commands in the body-fixed frame. This approach overcomes key limitations of conventional methods that rely on limited field-of-view sensors and struggle when the flight direction diverges from the drone’s heading. As the first systematic study on omnidirectional avoidance, this work establishes a benchmark comprising three representative tasks and introduces a fixed-random-yaw training strategy to enhance generalization. Experiments demonstrate that Fly360 achieves robust omnidirectional collision avoidance in both simulation and real-world settings, significantly outperforming baseline approaches constrained to forward-facing sensing.

πŸ“ Abstract
Obstacle avoidance in unmanned aerial vehicles (UAVs), as a fundamental capability, has gained increasing attention with the growing focus on spatial intelligence. However, current obstacle-avoidance methods mainly depend on limited field-of-view sensors and are ill-suited for UAV scenarios that require full-spatial awareness when the movement direction differs from the UAV's heading. This limitation motivates us to explore omnidirectional obstacle avoidance for panoramic drones with full-view perception. We first study an underexplored problem setting in which a UAV must generate collision-free motion in environments with obstacles from arbitrary directions, and then construct a benchmark consisting of three representative flight tasks. Building on this setting, we propose Fly360, a two-stage perception-decision pipeline with a fixed random-yaw training strategy. At the perception stage, panoramic RGB observations are converted into depth maps, which serve as a robust intermediate representation. A lightweight policy network then outputs body-frame velocity commands from the depth inputs. Extensive simulation and real-world experiments demonstrate that Fly360 achieves stable omnidirectional obstacle avoidance and outperforms forward-view baselines across all tasks. Our model is available at https://zxkai.github.io/fly360/
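The two-stage pipeline described in the abstract (panoramic RGB → depth map → body-frame velocity command) can be caricatured in a few lines. Everything below is an illustrative assumption, not Fly360's actual networks: `estimate_depth` stands in for a learned panorama-to-depth model, and the "policy" here is a toy greedy rule that steers toward the azimuth bin with the largest clearance.

```python
import math

def estimate_depth(panoramic_rgb):
    """Stage 1 stand-in: a real system would run a learned
    panorama-to-depth network here. Placeholder: treat each
    pixel column's mean brightness as a depth reading."""
    return [sum(col) / len(col) for col in panoramic_rgb]

def policy(depth_bins, max_speed=2.0):
    """Stage 2 stand-in: output a body-frame (vx, vy) velocity
    command pointing toward the azimuth bin with the greatest
    free space, slowing down when clearance is small."""
    n = len(depth_bins)
    best = max(range(n), key=lambda i: depth_bins[i])
    yaw = 2 * math.pi * best / n              # bin index -> azimuth angle
    speed = min(max_speed, depth_bins[best])  # cap speed by clearance
    return (speed * math.cos(yaw), speed * math.sin(yaw))

# Toy run: free space is deepest at bin 2 of 4 (azimuth 180 degrees),
# so the command points backward in the body frame.
depth = [0.5, 1.0, 3.0, 1.0]
vx, vy = policy(depth)
```

The point of the intermediate depth representation, as the abstract argues, is robustness: the policy never sees raw RGB, so it can be trained and deployed across visual domains as long as the depth stage generalizes.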
Problem

Research questions and friction points this paper is trying to address.

omnidirectional obstacle avoidance
UAV
full-view perception
spatial awareness
panoramic drone
Innovation

Methods, ideas, or system contributions that make the work stand out.

omnidirectional obstacle avoidance
panoramic perception
depth-based navigation
UAV spatial intelligence
random-yaw training
πŸ”Ž Similar Papers
No similar papers found.
Xiangkai Zhang
Institute of Automation Chinese Academy of Sciences, Beijing, China; Insta360 Research, Shenzhen, China
Dizhe Zhang
Insta360 Research, Shenzhen, China
WenZhuo Cao
Insta360 Research, Shenzhen, China
Zhaoliang Wan
Insta360
Generalist Robot Autonomy
Yingjie Niu
Insta360 Research, Shenzhen, China
Lu Qi
Insta360 | Wuhan University
Computer Vision, Deep Learning
Xu Yang
Chinese Academy of Sciences
computer vision, robot vision, graph algorithm
Zhiyong Liu
Institute of Computing Technology, Chinese Academy of Sciences
Computer Architecture, Algorithms, Parallel and Distributed Computing, Bioinformatics