🤖 AI Summary
This work addresses “local overfitting”—a newly identified phenomenon in deep neural networks wherein models exhibit selective performance degradation and knowledge forgetting within specific data subspaces, rather than globally. The authors formally define local overfitting and introduce a forgetting-rate score to quantify its severity. To mitigate it, they propose a two-stage framework that leverages historical training checkpoints, combining checkpoint ensembling with lightweight knowledge distillation—restoring lost knowledge without increasing inference cost. Evaluated across multiple datasets and architectures, the method significantly improves generalization, especially under label noise, outperforming baseline models and conventional ensembles while reducing both training and inference complexity. Key contributions include: (i) uncovering the mechanism of local forgetting; (ii) establishing a measurable, principled metric for local overfitting; and (iii) realizing a zero-overhead deployment paradigm for knowledge consolidation.
📝 Abstract
Overfitting in deep neural networks occurs less often than expected: theory predicts that greater model capacity should eventually lead to overfitting, yet this is rarely seen in practice. But what if overfitting does occur, not globally, but in specific sub-regions of the data space? In this work, we introduce a novel score that measures the forgetting rate of deep models on validation data, capturing what we term local overfitting: a performance degradation confined to certain regions of the input space. We demonstrate that local overfitting can arise even without conventional overfitting, and is closely linked to the double descent phenomenon.
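To make the idea concrete, a forgetting-rate score of this kind can be approximated as the fraction of validation samples that an earlier checkpoint classified correctly but a later checkpoint gets wrong. This is an illustrative sketch, not the paper's exact definition; the function name and its pairwise-checkpoint formulation are assumptions:

```python
def forgetting_rate(preds_early, preds_late, labels):
    """Illustrative proxy for local overfitting: the fraction of
    validation samples correct at an earlier checkpoint but
    misclassified at a later one (forgotten knowledge)."""
    forgotten = sum(
        1 for e, l, y in zip(preds_early, preds_late, labels)
        if e == y and l != y  # was right, is now wrong
    )
    return forgotten / len(labels)
```

A nonzero score can coexist with an improving overall validation accuracy, which is exactly the regime the paper calls local overfitting: some regions of the input space degrade while the aggregate metric still goes up.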
Building on these insights, we propose a two-stage approach that leverages the training history of a single model to recover and retain forgotten knowledge: first, by aggregating checkpoints into an ensemble, and then by distilling it into a single model of the original size, thus enhancing performance without added inference cost.
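The two stages above can be sketched as follows: average the softmax outputs of several checkpoints from one training run to form a "fused" teacher, then train a fresh model of the original size against that teacher with a standard distillation objective. This is a minimal sketch under stated assumptions — the averaging scheme, the plain KL objective, and all function names are illustrative, not the paper's exact formulation:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def fuse_checkpoints(checkpoint_logits):
    """Stage 1 (knowledge fusion, sketched): average the softmax
    outputs of several checkpoints of a single training run to
    form the ensemble teacher's distribution for one sample."""
    probs = [softmax(l) for l in checkpoint_logits]
    n = len(probs)
    return [sum(p[i] for p in probs) / n for i in range(len(probs[0]))]

def distill_loss(student_logits, teacher_probs):
    """Stage 2 (knowledge distillation, sketched): KL divergence
    from the fused teacher to the student for one sample.
    Temperature scaling, common in practice, is omitted here."""
    s = softmax(student_logits)
    return sum(t * math.log(t / q) for t, q in zip(teacher_probs, s) if t > 0)
```

Because the student matches the original architecture, inference cost is unchanged; the ensemble exists only during training, which is where the "win-win" over independently trained ensembles comes from.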
Extensive experiments across multiple datasets, modern architectures, and training regimes validate the effectiveness of our approach. Notably, in the presence of label noise, our method -- Knowledge Fusion followed by Knowledge Distillation -- outperforms both the original model and independently trained ensembles, achieving a rare win-win scenario: improved accuracy at reduced training and inference complexity.