AION: Aerial Indoor Object-Goal Navigation Using Dual-Policy Reinforcement Learning

📅 2026-01-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of semantic object navigation for aerial robots in unknown indoor environments without external localization or global maps. To this end, we propose AION, an end-to-end dual-policy reinforcement learning framework that, for the first time, decouples exploration and goal-reaching behaviors into two specialized policies, enabling efficient and safe 3D navigation using only visual inputs. The method is evaluated in high-fidelity IsaacSim simulations and on the AI2-THOR benchmark, demonstrating significant improvements over existing approaches. AION achieves state-of-the-art performance across key metrics, including exploration coverage, navigation efficiency, and flight safety, highlighting its effectiveness in complex, unstructured indoor settings.
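The summary above describes the core idea: two specialized RL policies, one for exploration and one for goal-reaching, combined into a single vision-based agent. A minimal sketch of that dual-policy structure is below; the class, the toy policies, and the detection-based switching rule are illustrative assumptions, not AION's actual implementation.

```python
class DualPolicyAgent:
    """Minimal dual-policy controller sketch: one policy explores the
    environment, the other takes over once the semantic target object
    is detected in the visual observation. All names and the switching
    rule here are hypothetical, not AION's actual design."""

    def __init__(self, explore_policy, goal_policy):
        self.explore_policy = explore_policy
        self.goal_policy = goal_policy

    def act(self, obs, target_visible):
        # Hand control to the goal-reaching policy only when the
        # target object appears in the current observation.
        policy = self.goal_policy if target_visible else self.explore_policy
        return policy(obs)


# Toy stand-ins for the two trained RL policies.
def explore(obs):
    return "yaw-and-scan"


def reach_goal(obs):
    return "approach-target"


agent = DualPolicyAgent(explore, reach_goal)
print(agent.act(None, target_visible=False))  # yaw-and-scan
print(agent.act(None, target_visible=True))   # approach-target
```

In practice both policies would be neural networks acting on RGB(-D) input, and the hand-off condition could itself be learned rather than a hard detection flag; this sketch only shows the decoupled control flow.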

📝 Abstract
Object-Goal Navigation (ObjectNav) requires an agent to autonomously explore an unknown environment and navigate toward target objects specified by a semantic label. While prior work has primarily studied zero-shot ObjectNav under 2D locomotion, extending it to aerial platforms with 3D locomotion capability remains underexplored. Aerial robots offer superior maneuverability and search efficiency, but they also introduce new challenges in spatial perception, dynamic control, and safety assurance. In this paper, we propose AION for vision-based aerial ObjectNav without relying on external localization or global maps. AION is an end-to-end dual-policy reinforcement learning (RL) framework that decouples exploration and goal-reaching behaviors into two specialized policies. We evaluate AION on the AI2-THOR benchmark and further assess its real-time performance in IsaacSim using high-fidelity drone models. Experimental results show that AION achieves superior performance across comprehensive evaluation metrics in exploration, navigation efficiency, and safety. The video can be found at https://youtu.be/TgsUm6bb7zg.
Problem

Research questions and friction points this paper is trying to address.

Aerial Object-Goal Navigation
3D Locomotion
Vision-Based Navigation
Indoor Exploration
Autonomous Aerial Robots
Innovation

Methods, ideas, or system contributions that make the work stand out.

Aerial Object-Goal Navigation
Dual-Policy Reinforcement Learning
Vision-Based Navigation
3D Locomotion
End-to-End RL
Zichen Yan
Department of Electrical and Computer Engineering, National University of Singapore, Singapore
Yuchen Hou
PhD student in Computer Science at University of California, Santa Barbara
Machine Learning · Computational Neuroscience · Data Science
Shenao Wang
Department of Electrical and Computer Engineering, National University of Singapore, Singapore
Yichao Gao
Department of Electrical and Computer Engineering, National University of Singapore, Singapore
Rui Huang
Department of Electrical and Computer Engineering, National University of Singapore, Singapore
Lin Zhao
Assistant Professor, National University of Singapore
control theory · reinforcement learning · robotics · autonomous vehicles · power systems