Class Incremental Fault Diagnosis under Limited Fault Data via Supervised Contrastive Knowledge Distillation

📅 2025-01-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing the three core challenges in class-incremental fault diagnosis—few-shot learning, long-tailed class distribution, and catastrophic forgetting—this paper proposes a continual learning framework tailored for industrial scenarios. Methodologically: (1) a supervised contrastive knowledge distillation mechanism is designed to enhance feature discriminability and cross-task generalization; (2) a gradient-sensitivity-based prioritized exemplar replay strategy is introduced to mitigate forgetting of previously learned classes; and (3) a random forest classifier replaces the conventional linear classification head to robustly handle severe class imbalance. Extensive experiments on multiple synthetic and real-world industrial datasets—with class imbalance ratios up to 1:50—demonstrate that the proposed method consistently outperforms existing state-of-the-art approaches. It achieves balanced improvements in accuracy for both novel and legacy classes, robustness under distribution shift, and scalability during incremental model expansion.
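The first component, supervised contrastive knowledge distillation, can be sketched as the sum of a supervised contrastive term (pulling same-class embeddings together while pushing other classes apart) and a distillation term that anchors the new model's embeddings to those of the frozen previous-task model. A minimal NumPy sketch follows; the function names, the cosine-distance form of the distillation term, and the loss weighting are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def sup_con_loss(feats, labels, temperature=0.1):
    """Supervised contrastive loss: for each anchor, maximize the
    log-probability of its same-class samples among all other samples."""
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = feats @ feats.T / temperature          # pairwise cosine similarities
    n = len(labels)
    not_self = ~np.eye(n, dtype=bool)            # exclude each anchor itself
    sim_max = sim.max(axis=1, keepdims=True)     # numerical stability
    exp_sim = np.exp(sim - sim_max) * not_self
    log_prob = sim - sim_max - np.log(exp_sim.sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & not_self
    # average log-probability over each anchor's positives
    loss = -(log_prob * pos).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return float(loss.mean())

def distill_loss(old_feats, new_feats):
    """Feature-level distillation: cosine distance between the frozen old
    model's embeddings and the current model's embeddings."""
    o = old_feats / np.linalg.norm(old_feats, axis=1, keepdims=True)
    c = new_feats / np.linalg.norm(new_feats, axis=1, keepdims=True)
    return float(np.mean(1.0 - np.sum(o * c, axis=1)))

# Combined objective (lambda_d is a hypothetical trade-off weight):
#   total = sup_con_loss(feats, labels) + lambda_d * distill_loss(old, feats)
```

The supervised contrastive term rewards tight same-class clusters, so well-separated features yield a lower loss than features whose labels are mixed, while the distillation term vanishes when the new embeddings match the old ones exactly.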

📝 Abstract
Class-incremental fault diagnosis requires a model to adapt to new fault classes while retaining previously acquired knowledge. However, little research addresses imbalanced and long-tailed data. Extracting discriminative features from few-shot fault data is challenging, and adding new fault classes often demands costly model retraining. Moreover, incremental training of existing methods risks catastrophic forgetting, and severe class imbalance can bias the model's decisions toward normal classes. To tackle these issues, we introduce the Supervised Contrastive knowledge distiLlation for class Incremental Fault Diagnosis (SCLIFD) framework, which combines supervised contrastive knowledge distillation for stronger representation learning and reduced forgetting, a novel prioritized exemplar selection method for sample replay to further alleviate catastrophic forgetting, and a random forest classifier to address class imbalance. Extensive experiments on simulated and real-world industrial datasets across various imbalance ratios demonstrate the superiority of SCLIFD over existing approaches. Our code can be found at https://github.com/Zhang-Henry/SCLIFD_TII.
Problem

Research questions and friction points this paper is trying to address.

Incremental Fault Diagnosis
Imbalanced Dataset
Catastrophic Forgetting
Innovation

Methods, ideas, or system contributions that make the work stand out.

SCLIFD framework
Imbalanced data handling
Class-incremental fault diagnosis
Hanrong Zhang
ZJU-UIUC Joint Institute at Zhejiang University, Haining 314400, China
Yifei Yao
ZJU-UoE Institute at Zhejiang University, Haining 314400, China
Zixuan Wang
College of Biomedical Engineering and Instrument Science at Zhejiang University, Hangzhou 310013, China
Jiayuan Su
Zhejiang University
Mengxuan Li
Zhejiang University
Peng Peng
ZJU-UIUC Joint Institute at Zhejiang University, Haining 314400, China
Hongwei Wang
ZJU-UIUC Joint Institute at Zhejiang University, Haining 314400, China