🤖 AI Summary
Addressing the three core challenges in class-incremental fault diagnosis—few-shot learning, long-tailed class distribution, and catastrophic forgetting—this paper proposes a continual learning framework tailored for industrial scenarios. Methodologically: (1) a supervised contrastive knowledge distillation mechanism is designed to enhance feature discriminability and cross-task generalization; (2) a gradient-sensitivity-based prioritized exemplar replay strategy is introduced to mitigate forgetting of previously learned classes; and (3) a random forest classifier replaces the conventional linear classification head to robustly handle severe class imbalance. Extensive experiments on multiple simulated and real-world industrial datasets—with class imbalance ratios up to 1:50—demonstrate that the proposed method consistently outperforms existing state-of-the-art approaches. It achieves balanced accuracy gains on both novel and previously learned classes, remains robust under distribution shift, and scales well during incremental model expansion.
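The supervised contrastive objective at the core of mechanism (1) can be illustrated in plain Python. This is a minimal SupCon-style loss sketch (anchor-positive pairs share a label; all other samples act as negatives), not the authors' implementation: the temperature value, batching, and distillation coupling here are illustrative assumptions.

```python
import math

def _dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def supcon_loss(embeddings, labels, tau=0.1):
    """Supervised contrastive loss over a batch of embeddings.

    For each anchor, pulls same-label embeddings together and pushes
    all others apart, averaged over anchors that have positives.
    """
    # L2-normalize so similarities are cosine similarities.
    zs = []
    for e in embeddings:
        norm = math.sqrt(_dot(e, e))
        zs.append([x / norm for x in e])

    total, count = 0.0, 0
    for i, zi in enumerate(zs):
        positives = [p for p in range(len(zs)) if p != i and labels[p] == labels[i]]
        if not positives:
            continue  # anchors without positives contribute nothing
        denom = sum(math.exp(_dot(zi, zs[a]) / tau)
                    for a in range(len(zs)) if a != i)
        loss_i = -sum(math.log(math.exp(_dot(zi, zs[p]) / tau) / denom)
                      for p in positives) / len(positives)
        total += loss_i
        count += 1
    return total / count
```

Because same-class embeddings that are already close yield a small loss, minimizing this objective sharpens class clusters in feature space, which is what makes the learned representation discriminative under few-shot conditions.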
📝 Abstract
Class-incremental fault diagnosis requires a model to adapt to new fault classes while retaining previous knowledge. However, limited research exists on this setting with imbalanced and long-tailed data. Extracting discriminative features from few-shot fault data is challenging, and adding new fault classes often demands costly model retraining. Moreover, incremental training of existing methods risks catastrophic forgetting, and severe class imbalance can bias the model's decisions toward normal classes. To tackle these issues, we introduce a Supervised Contrastive knowledge distiLlation for class Incremental Fault Diagnosis (SCLIFD) framework. SCLIFD combines supervised contrastive knowledge distillation for stronger representation learning and less forgetting, a novel prioritized exemplar selection method for sample replay to further alleviate catastrophic forgetting, and a Random Forest classifier to address class imbalance. Extensive experimentation on simulated and real-world industrial datasets across various imbalance ratios demonstrates the superiority of SCLIFD over existing approaches. Our code can be found at https://github.com/Zhang-Henry/SCLIFD_TII.
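The prioritized exemplar selection described above can be sketched under one plausible reading of "gradient-sensitivity": score each candidate sample by the norm of the cross-entropy gradient at its logits (softmax output minus one-hot label) and keep the most sensitive samples per class for replay. The scoring rule and per-class budget here are illustrative assumptions, not the paper's exact criterion.

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def gradient_scores(logits_per_sample, labels, num_classes):
    """Score each sample by ||softmax(z) - onehot(y)||, the norm of the
    cross-entropy gradient with respect to the logits z."""
    scores = []
    for z, y in zip(logits_per_sample, labels):
        p = softmax(z)
        onehot = [1.0 if c == y else 0.0 for c in range(num_classes)]
        scores.append(math.sqrt(sum((pi - oi) ** 2
                                    for pi, oi in zip(p, onehot))))
    return scores

def select_exemplars(logits_per_sample, labels, m_per_class, num_classes):
    """Keep the m most gradient-sensitive samples of each class,
    returned as sorted sample indices."""
    scores = gradient_scores(logits_per_sample, labels, num_classes)
    kept = []
    for c in range(num_classes):
        idx = [i for i, y in enumerate(labels) if y == c]
        idx.sort(key=lambda i: scores[i], reverse=True)
        kept.extend(idx[:m_per_class])
    return sorted(kept)
```

Confidently classified samples produce near-zero gradients and are dropped, while samples near the decision boundary are retained; replaying those boundary samples is what keeps old-class decision regions from collapsing as new classes arrive.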