🤖 AI Summary
This work addresses the lack of dynamic, interpretable metrics for quantifying catastrophic forgetting (CF) in continual learning. We propose CPCF, a novel evaluation metric based on adaptive conformal prediction that quantifies CF dynamically by monitoring real-time confidence shifts in model predictions on historical tasks, with no retraining, no task-specific assumptions, and no loss of interpretability. Its core innovation is the first integration of adaptive conformal prediction into CF assessment, unifying uncertainty calibration with continual learning evaluation. Experiments on four benchmark datasets show that CPCF correlates strongly with historical-task accuracy (average Spearman ρ > 0.92), significantly outperforming conventional static metrics. Moreover, CPCF is highly robust to distribution shift and practical for real-world deployment.
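The summary describes the mechanism only at a high level, so the following is a minimal sketch of how a CP-based forgetting monitor could work: calibrate a conformal threshold on held-out data from a previous task, then track a confidence factor derived from prediction-set sizes as new tasks are learned. For brevity the sketch uses plain split conformal prediction rather than the paper's adaptive variant, and the nonconformity score (1 − softmax of the true class), the inverse-set-size factor, and all names here are illustrative assumptions, not the paper's exact CPCF definition.

```python
import numpy as np

def conformal_threshold(cal_scores: np.ndarray, alpha: float = 0.1) -> float:
    """Split-conformal quantile of calibration nonconformity scores
    (here: 1 - softmax probability of the true class)."""
    n = len(cal_scores)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return float(np.quantile(cal_scores, level, method="higher"))

def prediction_set_sizes(probs: np.ndarray, qhat: float) -> np.ndarray:
    """Size of each sample's conformal prediction set: the number of labels
    whose nonconformity (1 - prob) falls within the threshold."""
    return ((1.0 - probs) <= qhat).sum(axis=1)

def cpcf_proxy(probs_old_task: np.ndarray, qhat: float) -> float:
    """Hypothetical confidence factor for a previously learned task:
    mean inverse prediction-set size on a held-out batch. As the model
    forgets, sets grow and the factor drops. Illustrative only; not the
    paper's exact CPCF formula."""
    sizes = np.maximum(prediction_set_sizes(probs_old_task, qhat), 1)  # guard empty sets
    return float(np.mean(1.0 / sizes))

# Example monitoring step (all values are stand-ins): after finishing each new
# task, score a frozen held-out batch from task 1 and log the factor.
rng = np.random.default_rng(0)
cal_scores = rng.uniform(0.0, 0.3, size=500)          # stand-in calibration scores
qhat = conformal_threshold(cal_scores, alpha=0.1)
probs_task1 = rng.dirichlet(np.ones(10), size=200)    # stand-in softmax outputs
print(f"CPCF proxy on task 1: {cpcf_proxy(probs_task1, qhat):.3f}")
```

A drop in this factor between checkpoints would signal forgetting on task 1 without any retraining, which is the monitoring behavior the summary describes.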
📝 Abstract
This work introduces a novel methodology for assessing catastrophic forgetting (CF) in continual learning. We propose a new conformal prediction (CP)-based metric, the Conformal Prediction Confidence Factor (CPCF), to quantify and evaluate CF. Our framework leverages adaptive CP to estimate forgetting by monitoring the model's confidence on previously learned tasks. This provides a dynamic and practical way to monitor and measure CF on earlier tasks as new ones are introduced, making the metric well suited to real-world applications. Experimental results on four benchmark datasets show a strong correlation between CPCF and the accuracy of previous tasks, validating the reliability and interpretability of the proposed metric. These results highlight the potential of CPCF as a robust and effective tool for assessing and understanding CF in dynamic learning environments.
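Both paragraphs use the rank correlation between CPCF and historical-task accuracy as the validation signal. A minimal version of that check might look like the following, with placeholder numbers standing in for real per-checkpoint measurements (scipy is assumed available; none of the values come from the paper):

```python
from scipy.stats import spearmanr

# Hypothetical measurements on task 1 taken after each subsequent task is learned:
cpcf_task1 = [0.95, 0.81, 0.72, 0.60, 0.48]  # illustrative values, not paper data
acc_task1 = [0.93, 0.84, 0.70, 0.62, 0.50]   # illustrative values, not paper data

# A high Spearman rho means CPCF ranks checkpoints the same way accuracy does.
rho, pval = spearmanr(cpcf_task1, acc_task1)
print(f"Spearman rho = {rho:.2f} (p = {pval:.3f})")
```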