🤖 AI Summary
Under high renewable energy penetration, power system protection faces challenges including fragmented machine learning (ML) applications, inconsistent evaluation criteria, heterogeneous data quality, and insufficient real-world validation, all of which hinder methodological comparability and engineering deployment. This study conducts a scoping review of over 100 works following the PRISMA framework. It introduces, for the first time, a task-oriented ML taxonomy and standardized terminology specifically for protection applications, and proposes an ML-centric task classification scheme, a standardized reporting template, data documentation guidelines, and a transparent evaluation protocol. The analysis uncovers systemic gaps in robustness testing, deployment feasibility, and empirical validation across existing studies. These contributions substantially enhance reproducibility, methodological rigor, and cross-study comparability, and they provide theoretical foundations and practical pathways for developing open benchmark datasets and realistic, application-oriented validation paradigms.
📄 Abstract
The integration of renewable and distributed energy resources is reshaping modern power systems and challenging conventional protection schemes. This scoping review synthesizes recent literature on machine learning (ML) applications in power system protection and disturbance management, following the PRISMA for Scoping Reviews framework. Based on over 100 publications, three key objectives are addressed: (i) assessing the scope of ML research in protection tasks; (ii) evaluating ML performance across diverse operational scenarios; and (iii) identifying methods suitable for evolving grid conditions. ML models often demonstrate high accuracy on simulated datasets; however, their performance under real-world conditions remains insufficiently validated. The existing literature is fragmented, with inconsistencies in methodological rigor, dataset quality, and evaluation metrics. This lack of standardization hampers the comparability of results and limits the generalizability of findings. To address these challenges, this review introduces an ML-oriented taxonomy for protection tasks, resolves key terminological inconsistencies, and advocates for standardized reporting practices. It further provides guidelines for comprehensive dataset documentation, methodological transparency, and consistent evaluation protocols, aiming to improve reproducibility and enhance the practical relevance of research outcomes. Critical gaps remain, including the scarcity of real-world validation, insufficient robustness testing, and limited consideration of deployment feasibility. Future research should prioritize public benchmark datasets, realistic validation methods, and advanced ML architectures. These steps are essential to move ML-based protection from theoretical promise to practical deployment in increasingly dynamic and decentralized power systems.