AI Summary
This work addresses the challenge of achieving omnidirectional collision avoidance for drones in complex environments with obstacles from arbitrary directions. The authors propose Fly360, a two-stage perception-decision framework that leverages panoramic RGB images to generate depth maps and employs a lightweight policy network to directly output velocity commands in the body-fixed frame. This approach overcomes key limitations of conventional methods that rely on limited field-of-view sensors and struggle when the flight direction diverges from the drone's heading. As the first systematic study on omnidirectional avoidance, this work establishes a benchmark comprising three representative tasks and introduces a fixed-random-yaw training strategy to enhance generalization. Experiments demonstrate that Fly360 achieves robust omnidirectional collision avoidance in both simulation and real-world settings, significantly outperforming baseline approaches constrained to forward-facing sensing.
Abstract
Obstacle avoidance in unmanned aerial vehicles (UAVs), as a fundamental capability, has gained increasing attention with the growing focus on spatial intelligence. However, current obstacle-avoidance methods mainly depend on limited field-of-view sensors and are ill-suited for UAV scenarios that require full-spatial awareness when the movement direction differs from the UAV's heading. This limitation motivates us to explore omnidirectional obstacle avoidance for panoramic drones with full-view perception. We first study an underexplored problem setting in which a UAV must generate collision-free motion in environments with obstacles from arbitrary directions, and then construct a benchmark consisting of three representative flight tasks. Based on this setting, we propose Fly360, a two-stage perception-decision pipeline with a fixed-random-yaw training strategy. The perception stage takes panoramic RGB observations as input and converts them into depth maps, which serve as a robust intermediate representation. The policy network is lightweight and outputs body-frame velocity commands from the depth inputs. Extensive simulation and real-world experiments demonstrate that Fly360 achieves stable omnidirectional obstacle avoidance and outperforms forward-view baselines across all tasks. Our model is available at https://zxkai.github.io/fly360/.
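To make the two-stage structure concrete, the sketch below shows the dataflow the abstract describes: a panoramic RGB frame is first mapped to a depth map (stage one), which a small policy then maps to a body-frame velocity command (stage two). This is a minimal illustrative mock-up, not the paper's implementation: `perception_stage`, `PolicyNetwork`, and all shapes and sizes are hypothetical placeholders, and the learned networks are replaced by untrained stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def perception_stage(panoramic_rgb):
    """Hypothetical stand-in for the depth model: maps a panoramic
    RGB image of shape (H, W, 3) to a depth map of shape (H, W)."""
    # The real system uses a learned network; averaging channels here
    # just produces a placeholder single-channel map.
    return panoramic_rgb.mean(axis=-1)

class PolicyNetwork:
    """Tiny untrained MLP standing in for the lightweight policy that
    maps a depth map to a body-frame velocity command (vx, vy, vz)."""
    def __init__(self, in_dim, hidden=32):
        self.w1 = rng.standard_normal((in_dim, hidden)) * 0.1
        self.w2 = rng.standard_normal((hidden, 3)) * 0.1

    def __call__(self, depth_map):
        x = depth_map.ravel()           # flatten depth input
        h = np.tanh(x @ self.w1)        # hidden layer
        return h @ self.w2              # body-frame velocity command

# One panoramic frame through the two-stage pipeline (toy sizes).
rgb = rng.random((8, 16, 3))            # stand-in panoramic image
depth = perception_stage(rgb)           # stage 1: RGB -> depth map
policy = PolicyNetwork(in_dim=depth.size)
velocity_cmd = policy(depth)            # stage 2: depth -> velocity
print(velocity_cmd.shape)               # (3,)
```

Because the policy consumes the depth map rather than raw RGB, the same decision stage can in principle be trained and evaluated independently of the panoramic camera, which is the motivation the abstract gives for using depth as the intermediate representation.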