🤖 AI Summary
Runway detection for autonomous landing suffers from scarce real-world annotated data, insufficient coverage of rare scenarios (e.g., nighttime), and significant synthetic-to-real domain shift. Method: This paper proposes a synthetic data generation framework leveraging a commercial flight simulator, coupled with a customized unsupervised domain adaptation (UDA) approach. A high-fidelity synthetic runway image dataset is constructed and aligned with limited real labeled data via a lightweight feature-level alignment strategy to mitigate the distribution discrepancy. Contribution/Results: Experiments demonstrate substantial improvements in detection accuracy and robustness under both normal and unseen challenging conditions (e.g., nighttime), achieving a 12.3% mAP gain over the real-data-only baseline and outperforming state-of-the-art UDA methods. The results validate the generalization capability and engineering feasibility of synthetic data for safety-critical aviation vision tasks.
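The summary does not spell out what "feature-level alignment" computes. One common family of such strategies penalizes a distance between the feature distributions of the synthetic and real domains, for instance the maximum mean discrepancy (MMD). The sketch below is a generic NumPy illustration of that idea, not the authors' method; the function names, the RBF bandwidth choice, and the toy feature batches are all hypothetical.

```python
import numpy as np

def rbf_kernel(a, b, gamma):
    """RBF kernel matrix between the rows of a and b."""
    sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

def mmd2(feat_syn, feat_real, gamma):
    """Biased MMD^2 estimate between two feature batches.

    A UDA training loop could add this (or a differentiable
    equivalent) as an alignment penalty on backbone features.
    """
    k_ss = rbf_kernel(feat_syn, feat_syn, gamma).mean()
    k_rr = rbf_kernel(feat_real, feat_real, gamma).mean()
    k_sr = rbf_kernel(feat_syn, feat_real, gamma).mean()
    return k_ss + k_rr - 2.0 * k_sr

rng = np.random.default_rng(0)
dim = 16
gamma = 1.0 / dim  # simple bandwidth heuristic, hypothetical choice

syn = rng.normal(0.0, 1.0, size=(64, dim))        # stand-in synthetic features
real_like = rng.normal(0.0, 1.0, size=(64, dim))  # real features, same distribution
shifted = rng.normal(1.5, 1.0, size=(64, dim))    # real features under domain shift

print(mmd2(syn, syn, gamma))  # → 0.0 (identical batches cancel exactly)
# Domain shift shows up as a larger discrepancy:
print(mmd2(syn, shifted, gamma) > mmd2(syn, real_like, gamma))
```

Minimizing such a term while training the detector pulls synthetic and real feature distributions together, which is the role a feature-level alignment plays in mitigating the synthetic-to-real gap.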
📝 Abstract
Deep vision models are now mature enough to be integrated into industrial and possibly critical applications such as autonomous navigation. Yet, collecting and labeling the data needed to train such models requires too much effort and cost for a single company or product. This drawback is even more significant in critical applications, where training data must cover all possible conditions, including rare scenarios. From this perspective, generating synthetic images is an appealing solution, since it allows cheap yet reliable coverage of all conditions and environments, provided the impact of the synthetic-to-real distribution shift is mitigated. In this article, we consider the case of runway detection, a critical component of the autonomous landing systems developed by aircraft manufacturers. We propose an image generation approach based on a commercial flight simulator that complements a few annotated real images. By controlling the image generation and the integration of real and synthetic data, we show that standard object detection models can achieve accurate predictions. We also evaluate their robustness with respect to adverse conditions, in our case nighttime images, which were not represented in the real data, and show the benefit of using a customized domain adaptation strategy.