AI Summary
To address the poor real-time performance and low tracking accuracy of traditional methods under complex sky-background conditions and dense space debris scenarios, this paper proposes SDT-Net, an end-to-end deep learning model for space debris tracking. We introduce the first sky-based observational modeling approach for synthetic data generation, enabling construction of SDTD, the first large-scale space debris tracking dataset (18K videos, 250K synthetic targets). SDT-Net unifies detection and trajectory association through robust feature representation and end-to-end joint optimization. On SDTD, it significantly outperforms conventional approaches. Evaluated on real Antarctic observational data, it achieves a Multiple Object Tracking Accuracy (MOTA) of 70.6%, demonstrating strong generalization. Both the source code and the SDTD dataset will be publicly released.
Abstract
With the rapid development of space exploration, space debris has attracted growing attention due to the extreme threat it potentially poses, creating a need for real-time and accurate debris tracking. However, existing methods are mainly based on traditional signal processing, which cannot effectively handle complex backgrounds and dense space debris. In this paper, we propose a deep learning-based Space Debris Tracking Network (SDT-Net) to achieve highly accurate debris tracking. SDT-Net effectively represents debris features, enhancing the efficiency and stability of end-to-end model learning. To train and evaluate this model effectively, we also produce a large-scale dataset, the Space Debris Tracking Dataset (SDTD), using a novel observation-based data simulation scheme. SDTD contains 18,040 video sequences with a total of 62,562 frames and covers 250,000 synthetic space debris targets. Extensive experiments validate the effectiveness of our model and the challenge posed by our dataset. Furthermore, we test our model on real data from the Antarctic Station, achieving a MOTA score of 70.6%, which demonstrates its strong transferability to real-world scenarios. Our dataset and code will be released soon.
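For readers unfamiliar with the reported metric: MOTA is the standard CLEAR MOT accuracy score, which penalizes missed targets, false alarms, and identity switches relative to the total number of ground-truth objects. The sketch below shows the standard formula; the error counts used in the usage example are purely hypothetical illustrations, not numbers from the paper's evaluation.

```python
def mota(num_gt: int, false_negatives: int, false_positives: int,
         id_switches: int) -> float:
    """Multiple Object Tracking Accuracy (CLEAR MOT definition).

    MOTA = 1 - (FN + FP + IDSW) / GT, where GT is the total number of
    ground-truth object instances summed over all frames. MOTA can be
    negative when the tracker makes more errors than there are objects.
    """
    return 1.0 - (false_negatives + false_positives + id_switches) / num_gt


# Hypothetical error counts chosen only to illustrate how a score of
# 0.706 (70.6%) could arise; these are not the paper's actual counts.
score = mota(num_gt=1000, false_negatives=200, false_positives=80,
             id_switches=14)
print(f"MOTA = {score:.1%}")  # prints "MOTA = 70.6%"
```

In practice, per-frame matching between predictions and ground truth (typically via IoU-based assignment) produces these counts before the formula is applied.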