🤖 AI Summary
UAV hardware testing faces significant challenges in safety, scalability, cost, and ecological impact, making high-fidelity simulation a critical alternative; yet systematic, evidence-based criteria for platform selection remain lacking. Method: This study surveys 44 UAV simulation platforms and conducts empirical, multi-dimensional evaluations of 14 leading candidates across physics engine accuracy, sensor model fidelity, ROS/Gazebo compatibility, extensibility, and ecosystem support. It further introduces the first standardized simulation platform selection framework, enabling quantitative alignment between platform capabilities and algorithmic requirements. Contribution/Results: The proposed full-stack comparison matrix fills a critical gap in systematic UAV simulation tool evaluation. Deployed by more than 20 research teams, it has reduced hardware prototyping costs by over 70% and significantly improved cross-platform algorithm migration success rates.
📝 Abstract
Uncrewed Aerial Vehicle (UAV) research faces challenges in safety, scalability, cost, and ecological impact when conducting hardware testing. High-fidelity simulators offer a vital alternative by replicating real-world conditions, enabling the development and evaluation of novel perception and control algorithms. However, the large number of available simulators makes it difficult for researchers to determine which one best suits their specific use case, given each simulator's limitations and readiness for customization. In this paper, we present an overview of 44 UAV simulators, including in-depth, systematic comparisons of 14 of them. Additionally, we present a set of decision factors for simulator selection, aiming to enhance the efficiency and safety of research endeavors.