A New Dataset and Performance Benchmark for Real-time Spacecraft Segmentation in Onboard Flight Computers

📅 2025-07-14
🤖 AI Summary
The lack of high-quality, publicly available segmentation datasets for on-orbit spacecraft damage detection hinders progress in autonomous inspection. Method: We introduce SWiM (Spacecraft in the Wild and Motion), a large-scale, high-fidelity spacecraft image segmentation dataset of nearly 64,000 annotated images, created by superimposing real spacecraft models on a mixture of real backgrounds and synthetic backgrounds generated with NASA's TTALOS pipeline, and augmented with physics-informed imaging noise and geometric distortion. We establish a real-time inference benchmark tailored to resource-constrained onboard platforms and develop efficient segmentation models by fine-tuning YOLOv8 and YOLOv11. Contribution/Results: Under representative onboard hardware constraints, the resulting models achieve a Dice score of 0.92, a Hausdorff distance of 0.69, and an inference latency of about 0.5 seconds, demonstrating the feasibility of real-time, autonomous spacecraft inspection.

📝 Abstract
Spacecraft deployed in outer space are routinely subjected to various forms of damage due to exposure to hazardous environments. In addition, there are significant risks to the subsequent process of in-space repairs through human extravehicular activity or robotic manipulation, incurring substantial operational costs. Recent developments in image segmentation could enable the development of reliable and cost-effective autonomous inspection systems. While these models often require large amounts of training data to achieve satisfactory results, publicly available annotated spacecraft segmentation data are very scarce. Here, we present a new dataset of nearly 64k annotated spacecraft images that was created using real spacecraft models, superimposed on a mixture of real and synthetic backgrounds generated using NASA's TTALOS pipeline. To mimic camera distortions and noise in real-world image acquisition, we also added different types of noise and distortion to the images. Finally, we fine-tuned YOLOv8 and YOLOv11 segmentation models to generate performance benchmarks for the dataset under well-defined hardware and inference-time constraints, mimicking real-world image segmentation challenges for real-time onboard applications on NASA's inspector spacecraft. The resulting models, when tested under these constraints, achieved a Dice score of 0.92, a Hausdorff distance of 0.69, and an inference time of about 0.5 seconds. The dataset and the benchmark models are available at https://github.com/RiceD2KLab/SWiM.
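The abstract reports Dice score and Hausdorff distance as its headline segmentation metrics. As a minimal illustration only (this is not the paper's evaluation code; representing masks as sets of pixel coordinates and the function names are assumptions made here), the two metrics can be sketched as:

```python
from math import dist

def dice_score(pred, truth):
    """Dice coefficient between two binary masks given as sets of (row, col) pixels."""
    inter = len(pred & truth)
    return 2 * inter / (len(pred) + len(truth))

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two non-empty pixel point sets."""
    def directed(u, v):
        return max(min(dist(p, q) for q in v) for p in u)
    return max(directed(a, b), directed(b, a))

pred = {(0, 0), (0, 1), (1, 0)}
truth = {(0, 0), (0, 1), (1, 1)}
print(dice_score(pred, truth))  # 2*2/(3+3) ≈ 0.667
print(hausdorff(pred, truth))   # 1.0 (worst-case nearest-pixel gap)
```

In practice, toolkits compute these over dense mask arrays rather than pixel sets, but the definitions are the same: Dice measures region overlap, while Hausdorff distance penalizes the worst boundary deviation.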
Problem

Research questions and friction points this paper is trying to address.

Lack of annotated spacecraft images for segmentation training
Need for real-time autonomous spacecraft inspection systems
Challenges in mimicking real-world space imaging conditions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Created a dataset of nearly 64k annotated spacecraft images
Mixed real and synthetic backgrounds generated with NASA's TTALOS pipeline
Fine-tuned YOLOv8 and YOLOv11 for real-time segmentation
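The dataset mimics real-world acquisition by injecting imaging noise into the rendered images. As a minimal sketch of one such augmentation (zero-mean Gaussian sensor noise with saturation clipping; the function name and sigma value are illustrative assumptions, not taken from the paper):

```python
import random

def add_sensor_noise(image, sigma=8.0, seed=0):
    """Add zero-mean Gaussian noise to an 8-bit grayscale image (list of rows),
    clipping to [0, 255] the way a saturating camera sensor would."""
    rng = random.Random(seed)  # seeded for reproducible augmentation
    return [[min(255, max(0, round(px + rng.gauss(0, sigma)))) for px in row]
            for row in image]

clean = [[120, 121], [119, 122]]
noisy = add_sensor_noise(clean)
```

Real pipelines would apply this per-channel on full-resolution arrays and combine it with geometric distortion models; the clipping step matters because unclipped noise would produce physically impossible pixel values.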
Jeffrey Joan Sam
Department of Computer Science, Rice University, Houston TX, USA
Janhavi Sathe
Department of Computer Science, Rice University, Houston TX, USA
Nikhil Chigali
Department of Computer Science, Rice University, Houston TX, USA
Naman Gupta
Carnegie Mellon University
Radhey Ruparel
Department of Computer Science, Rice University, Houston TX, USA
Yicheng Jiang
Department of Computer Science, Rice University, Houston TX, USA
Janmajay Singh
Department of Computer Science, Rice University, Houston TX, USA
James W. Berck
R5 Spacecraft Project, NASA, Houston TX, USA
Arko Barman
Data to Knowledge Lab, Rice University, Houston TX, USA