🤖 AI Summary
Addressing the challenges of low detection accuracy and poor interpretability in anomaly detection for solar Hα observational data, this paper proposes a lightweight, rule-based, non-learning anomaly detection method. The method leverages physics-informed priors to design image feature extraction and multi-scale thresholding criteria, enabling user-defined anomaly classification rules and precise localization of anomalous regions with associated confidence scores. By integrating multi-temporal Hα observations from the Global Oscillation Network Group (GONG), it achieves transparent anomaly identification and quantitative visual assessment. Evaluated on a curated dataset of 2,000 samples, the method outperforms state-of-the-art approaches in accuracy while significantly enhancing interpretability, practicality, and domain adaptability in data quality control. It thus provides an efficient, reliable, and physically grounded tool for preprocessing solar physics observational data.
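The summary describes a rule-based detector: extract simple image features, apply thresholds over local regions, and report flagged regions with confidence scores. The paper's actual features and rules are not given here, so the following is only an illustrative sketch of that general pattern; the function name, block size, and linear confidence mapping are assumptions, not the published method.

```python
import numpy as np

def detect_anomalies(image, block=8, low=0.05, high=0.95, frac_thresh=0.5):
    """Illustrative rule-based anomaly check (not the paper's algorithm).

    Flags block-sized regions whose fraction of near-dead (< low) or
    near-saturated (> high) pixels exceeds frac_thresh, and returns a
    list of (row, col, confidence) tuples for the flagged regions.
    """
    h, w = image.shape
    flags = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            tile = image[r:r + block, c:c + block]
            # fraction of pixels violating the user-defined intensity rule
            bad = np.mean((tile < low) | (tile > high))
            if bad > frac_thresh:
                # map the excess over the threshold linearly onto [0, 1]
                conf = (bad - frac_thresh) / (1.0 - frac_thresh)
                flags.append((r, c, float(conf)))
    return flags
```

Because every flag comes from an explicit rule, each flagged region can be traced back to the exact criterion it violated, which is the kind of transparency the summary attributes to the method.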
📝 Abstract
The proliferation of space-borne and ground-based observatories has provided astrophysicists with an unprecedented volume of data, which can only be processed at scale using advanced computing algorithms. Consequently, ensuring the quality of data fed into machine learning (ML) models is critical. The Hα observations from the GONG network represent one such data stream, producing several observations per minute, 24/7, since 2010. In this study, we introduce a lightweight (non-ML) anomaly-detection algorithm, called H-Alpha Anomalyzer, designed to identify anomalous observations based on user-defined criteria. Unlike many black-box algorithms, our approach highlights exactly which regions triggered the anomaly flag and quantifies the corresponding anomaly likelihood. For our comparative analysis, we also created and released a dataset of 2,000 observations, equally divided between anomalous and non-anomalous cases. Our results demonstrate that the proposed model not only outperforms existing methods but also provides explainability, enabling qualitative evaluation by domain experts.