🤖 AI Summary
The absence of systematic evaluation criteria hinders informed selection of underwater robotics simulation platforms. Method: This study introduces the first multidimensional, unified evaluation framework to conduct a horizontal comparison of five open-source, ROS-compatible simulators—Stonefish, DAVE, HoloOcean, MARUS, and UNav-Sim—across core dimensions: sensor fidelity, environmental modeling accuracy, sim-to-real transfer capability, and research impact. Evaluation integrates ROS ecosystem compatibility analysis, multi-metric quantitative assessment, empirical transfer experiments, and open-science principles. Contribution/Results: The study uncovers fundamental trade-offs among platforms in physical modeling depth, architectural extensibility, and task-specific adaptability. It identifies an urgent need for standardized benchmarking protocols and delivers a reproducible, evidence-based platform selection guide. These outcomes provide both theoretical foundations and practical guidelines for developing high-fidelity, generalizable underwater simulation systems.
📝 Abstract
The increasing complexity of underwater robotic systems has led to a surge in simulation platforms designed to support perception, planning, and control tasks in marine environments. However, selecting the most appropriate underwater robotic simulator (URS) remains a challenge due to wide variations in fidelity, extensibility, and task suitability. This paper presents a comprehensive review and comparative analysis of five state-of-the-art, ROS-compatible, open-source URSs: Stonefish, DAVE, HoloOcean, MARUS, and UNav-Sim. Each simulator is evaluated across multiple criteria, including architectural design, sensor fidelity and physics modeling, environmental realism, task capabilities, sim-to-real transfer, and research impact. Additionally, we discuss ongoing challenges in sim-to-real transfer and highlight the need for standardization and benchmarking in the field. Our findings aim to guide practitioners in selecting effective simulation environments and to inform the future development of more robust and transferable URSs.