🤖 AI Summary
Custom convolutional neural networks (CNNs) exhibit low efficiency and poor generalization in classifying astronomical alert images from wide-field time-domain surveys (e.g., ZTF).
Method: This work systematically investigates the feasibility of transferring vision pretraining paradigms to this domain. We adopt standard architectures (e.g., ResNet) and conduct supervised pretraining on both ImageNet and the Galaxy Zoo dataset, followed by transfer learning for alert classification.
Contribution/Results: We present the first empirical evidence that Galaxy Zoo pretraining substantially outperforms both ImageNet pretraining and random initialization, yielding a +3.2% average F1-score gain across multi-class alert identification. The resulting models match or exceed the accuracy of custom CNN baselines while accelerating inference by over 50% and significantly reducing GPU memory consumption. This study establishes domain-adapted pretraining as an effective strategy for time-domain astronomy data processing, advancing astronomical AI toward standardized, efficient, and scalable vision modeling.
📝 Abstract
Modern wide-field time-domain surveys facilitate the study of transient, variable and moving phenomena by conducting image differencing and relaying alerts to their communities. Machine learning tools have been used on data from these surveys and their precursors for more than a decade, and convolutional neural networks (CNNs), which make predictions directly from input images, saw particularly broad adoption through the 2010s. Since then, continued rapid advances in computer vision have transformed the standard practices around using such models. It is now commonplace to use standardized architectures pre-trained on large corpora of everyday images (e.g., ImageNet). In contrast, time-domain astronomy studies still typically design custom CNN architectures and train them from scratch. Here, we explore the effects of adopting various pre-training regimens and standardized model architectures on the performance of alert classification. We find that the resulting models match or outperform a custom, specialized CNN like those typically used for filtering alerts. Moreover, our results show that pre-training on galaxy images from Galaxy Zoo tends to yield better performance than pre-training on ImageNet or training from scratch. We observe that the designs of standardized architectures are much better optimized than the custom CNN baseline, requiring significantly less time and memory for inference despite having more trainable parameters. On the eve of the Legacy Survey of Space and Time and other image-differencing surveys, these findings advocate for a paradigm shift in the creation of vision models for alerts, demonstrating that greater performance and efficiency, in time and in data, can be achieved by adopting the latest practices from the computer vision field.