🤖 AI Summary
To address the heavy reliance on large-scale manual annotation for high-accuracy semantic segmentation in photovoltaic (PV) electroluminescence (EL) image defect detection, this paper proposes PV-S3, a semi-supervised framework built on a U-Net backbone. PV-S3 integrates consistency regularization, dynamic pseudo-label optimization, and an adaptive thresholding mechanism, and introduces a novel semi cross-entropy loss to mitigate the extreme class imbalance between defective and non-defective regions. Evaluated on the UCF-EL dataset, PV-S3 achieves 9.7% higher IoU and 20.42% higher F1-score than fully supervised baselines when trained with only 20% labeled data, reducing annotation cost by 80% while surpassing state-of-the-art fully supervised methods. This work presents the first empirical demonstration that semi-supervised learning can outperform fully supervised learning in PV defect segmentation, establishing a new paradigm for low-cost, robust intelligent diagnosis of EL images.
📄 Abstract
Photovoltaic (PV) systems allow us to tap into abundant solar energy; however, they require regular maintenance to sustain high efficiency and prevent degradation. Traditional manual health checks using Electroluminescence (EL) imaging are expensive and logistically challenging, which makes automated defect detection essential. Current automation approaches require extensive manual expert labeling, which is time-consuming, expensive, and prone to errors. We propose PV-S3 (Photovoltaic-Semi Supervised Segmentation), a Semi-Supervised Learning approach for semantic segmentation of defects in EL images that reduces reliance on extensive labeling. PV-S3 is a deep learning model trained using a few labeled images along with numerous unlabeled images. We introduce a novel Semi Cross-Entropy loss function to deal with class imbalance. We evaluate PV-S3 on multiple datasets and demonstrate its effectiveness and adaptability. With merely 20% labeled samples, we achieve absolute improvements of 9.7% in IoU, 13.5% in Precision, 29.15% in Recall, and 20.42% in F1-Score over the prior state-of-the-art supervised method (which uses 100% labeled samples) on the UCF-EL dataset (the largest dataset available for semantic segmentation of EL images), improving performance while reducing annotation costs by 80%. For more details, visit our GitHub repository: https://github.com/abj247/PV-S3.
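To make the class-imbalance idea concrete, the sketch below shows one way an imbalance-aware cross-entropy for segmentation can be written: per-pixel cross-entropy is averaged only over pixels belonging to the rare defect classes, so the dominant non-defective background does not drown out the loss signal. This is a minimal illustrative reconstruction, not the exact Semi Cross-Entropy formulation from the paper; the function name, the `ignore_class` convention, and the fallback behavior are all assumptions.

```python
import numpy as np

def semi_cross_entropy(probs, labels, ignore_class=0, eps=1e-8):
    """Illustrative imbalance-aware cross-entropy for segmentation.

    probs:  (H, W, C) softmax probabilities per pixel.
    labels: (H, W) integer class map; `ignore_class` marks the
            dominant non-defective background class (assumed 0 here).

    The loss averages only over defect pixels, a hypothetical sketch
    of how rare classes can be kept from being drowned out. It is not
    the exact PV-S3 loss.
    """
    h, w, _ = probs.shape
    # Per-pixel cross-entropy: -log p(true class) at every pixel.
    picked = probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    pixel_ce = -np.log(picked + eps)
    mask = labels != ignore_class  # keep only defect pixels
    if mask.sum() == 0:
        # No defect pixels in this image: fall back to the plain mean.
        return pixel_ce.mean()
    return pixel_ce[mask].mean()
```

With uniform predictions the loss reduces to -log(1/C) regardless of how few defect pixels exist, whereas a plain mean over all pixels would be dominated by background terms.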