🤖 AI Summary
The lack of high-quality, publicly available segmentation datasets for on-orbit spacecraft damage detection hinders progress in autonomous inspection. Method: We introduce SWiM, a large-scale, high-fidelity spacecraft image segmentation dataset comprising nearly 64,000 annotated images, synthesized by superimposing realistic spacecraft models onto a mixture of real and synthetic backgrounds generated with NASA's TTALOS pipeline, with imaging noise and geometric distortion added to mimic real-world acquisition. We also establish a real-time inference benchmark tailored to resource-constrained onboard platforms. Efficient segmentation models are developed by fine-tuning YOLOv8 and YOLOv11. Contribution/Results: Under representative onboard hardware constraints, the fine-tuned models achieve a Dice score of 0.92, a Hausdorff distance of 0.69, and an inference time of about 0.5 seconds, demonstrating feasibility for real-time, autonomous spacecraft inspection.
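The summary reports Dice score and Hausdorff distance as the segmentation metrics. The paper's own evaluation code is in the linked repository; the sketch below is only a minimal illustration of how these two metrics are conventionally computed for binary masks, using NumPy and SciPy (the function names `dice_score` and `hausdorff_px` are illustrative, not from the paper).

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_score(pred, gt):
    """Dice coefficient between two binary masks: 2|P∩G| / (|P|+|G|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2.0 * inter / total if total else 1.0

def hausdorff_px(pred, gt):
    """Symmetric Hausdorff distance (in pixels) between foreground point sets."""
    p = np.argwhere(pred)
    g = np.argwhere(gt)
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])
```

With these conventions, a predicted mask shifted one pixel from the ground truth yields a Hausdorff distance of exactly 1 pixel, which gives a sense of how tight the reported 0.69 figure is.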
📝 Abstract
Spacecraft deployed in outer space are routinely subjected to various forms of damage due to exposure to hazardous environments. Subsequent in-space repair, whether through human extravehicular activity or robotic manipulation, carries significant risk and incurs substantial operational costs. Recent developments in image segmentation could enable reliable and cost-effective autonomous inspection systems. While these models often require large amounts of training data to achieve satisfactory results, publicly available annotated spacecraft segmentation data are very scarce. Here, we present a new dataset of nearly 64k annotated spacecraft images that was created using real spacecraft models, superimposed on a mixture of real and synthetic backgrounds generated using NASA's TTALOS pipeline. To mimic camera distortions and noise in real-world image acquisition, we also added different types of noise and distortion to the images. Finally, we fine-tuned YOLOv8 and YOLOv11 segmentation models to generate performance benchmarks for the dataset under well-defined hardware and inference time constraints, mimicking real-world image segmentation challenges for real-time onboard applications on NASA's inspector spacecraft. The resulting models, when tested under these constraints, achieved a Dice score of 0.92, a Hausdorff distance of 0.69, and an inference time of about 0.5 seconds. The dataset and benchmark models are available at https://github.com/RiceD2KLab/SWiM.
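The abstract mentions adding different types of noise and distortion to mimic real-world acquisition, but does not spell out the exact noise model. As a hedged sketch of what such an augmentation step might look like, here is one common sensor-noise recipe (additive Gaussian read noise plus salt-and-pepper pixel defects); the function name and parameter values are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def add_sensor_noise(img, gaussian_sigma=5.0, sp_fraction=0.002, rng=None):
    """Apply additive Gaussian read noise plus salt-and-pepper defects.

    img: uint8 array of shape (H, W) or (H, W, C).
    gaussian_sigma: std-dev of the additive Gaussian noise (illustrative value).
    sp_fraction: total fraction of pixels forced to 0 or 255 (illustrative value).
    """
    rng = np.random.default_rng(rng)
    # Additive Gaussian noise in float space, clipped back to the uint8 range.
    out = img.astype(np.float32) + rng.normal(0.0, gaussian_sigma, img.shape)
    out = np.clip(out, 0, 255)
    # Salt-and-pepper: a 2-D mask selects whole pixels across all channels.
    mask = rng.random(img.shape[:2])
    out[mask < sp_fraction / 2] = 0          # "pepper" (dead pixels)
    out[mask > 1 - sp_fraction / 2] = 255    # "salt" (hot pixels)
    return out.astype(np.uint8)
```

Geometric distortion (e.g. lens-model warping) would typically be applied as a separate remapping step; the abstract does not specify which distortion model was used, so it is omitted here.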