Forget Me Not: Fighting Local Overfitting with Knowledge Fusion and Distillation

📅 2025-07-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses “local overfitting”, a newly identified phenomenon in deep neural networks whereby a model's performance degrades, and previously learned knowledge is forgotten, within specific sub-regions of the data space rather than globally. The authors formally define local overfitting and introduce a forgetting-rate metric to quantify its severity. To mitigate it, they propose a two-stage framework that leverages the training history of a single model: checkpoints are first aggregated into an ensemble (Knowledge Fusion), which is then distilled into a single model of the original size (Knowledge Distillation), restoring lost knowledge without increasing inference cost. Evaluated across multiple datasets and architectures, the method significantly improves generalization, especially under label noise, outperforming both the original models and independently trained ensembles while reducing training and inference complexity. Key contributions include: (i) uncovering the mechanism of local forgetting; (ii) establishing a principled, measurable metric for local overfitting; and (iii) a deployment paradigm for knowledge consolidation with no added inference overhead.

📝 Abstract
Overfitting in deep neural networks occurs less frequently than expected. This is a puzzling observation, as theory predicts that greater model capacity should eventually lead to overfitting -- yet this is rarely seen in practice. But what if overfitting does occur, not globally, but in specific sub-regions of the data space? In this work, we introduce a novel score that measures the forgetting rate of deep models on validation data, capturing what we term local overfitting: a performance degradation confined to certain regions of the input space. We demonstrate that local overfitting can arise even without conventional overfitting, and is closely linked to the double descent phenomenon. Building on these insights, we introduce a two-stage approach that leverages the training history of a single model to recover and retain forgotten knowledge: first, by aggregating checkpoints into an ensemble, and then by distilling it into a single model of the original size, thus enhancing performance without added inference cost. Extensive experiments across multiple datasets, modern architectures, and training regimes validate the effectiveness of our approach. Notably, in the presence of label noise, our method -- Knowledge Fusion followed by Knowledge Distillation -- outperforms both the original model and independently trained ensembles, achieving a rare win-win scenario: reduced training and inference complexity.
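The forgetting-rate score described above can be illustrated with a minimal sketch: compare an earlier checkpoint's validation predictions with the final model's, and count the samples that were classified correctly before but are misclassified now. The paper's exact score may be defined differently (e.g. aggregated over many checkpoints); the function name and this particular formulation are assumptions for illustration.

```python
import numpy as np

def forgetting_rate(early_preds, final_preds, labels):
    """Fraction of validation samples that an earlier checkpoint
    classified correctly but the final model gets wrong -- a proxy
    for knowledge 'forgotten' during the rest of training.

    Illustrative sketch only; the paper's score may aggregate over
    multiple checkpoints or weight samples differently.
    """
    early_preds = np.asarray(early_preds)
    final_preds = np.asarray(final_preds)
    labels = np.asarray(labels)

    early_correct = early_preds == labels   # known at the checkpoint
    final_wrong = final_preds != labels     # lost by the final model
    forgotten = early_correct & final_wrong
    return forgotten.mean()
```

A nonzero forgetting rate can appear even while overall validation accuracy improves, which is exactly the "local" degradation the abstract describes.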
Problem

Research questions and friction points this paper is trying to address.

Measure local overfitting via forgetting rate
Address performance degradation in data sub-regions
Enhance model performance without inference cost
Innovation

Methods, ideas, or system contributions that make the work stand out.

Measure local overfitting via forgetting rate
Aggregate checkpoints into ensemble model
Distill ensemble into single original-size model
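The two innovation steps above (fuse checkpoints into an ensemble, then distill back into a single model) can be sketched as follows. Here Knowledge Fusion is shown as a simple average of checkpoint class probabilities, and the distillation step is represented by temperature-softened teacher targets; the paper's actual fusion weights, temperature, and loss are not specified here, so these choices are assumptions.

```python
import numpy as np

def fuse_checkpoints(checkpoint_probs):
    """Knowledge Fusion (sketch): average the predicted class
    probabilities of several checkpoints of the SAME training run,
    forming an ensemble over training time."""
    return np.mean(np.asarray(checkpoint_probs), axis=0)

def distillation_targets(teacher_probs, T=2.0):
    """Knowledge Distillation targets (sketch): soften the fused
    teacher's probabilities with temperature T. A student of the
    original size would then be trained against these soft targets,
    keeping inference cost unchanged."""
    logits = np.log(np.clip(teacher_probs, 1e-12, None))
    scaled = logits / T
    e = np.exp(scaled - scaled.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)
```

Because only the single distilled student is deployed, the ensemble's benefit is retained at the original model's inference cost, matching the abstract's "no added inference cost" claim.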
Uri Stern
School of Computer Science and Engineering, The Hebrew University of Jerusalem, Jerusalem 91904, Israel
Eli Corn
School of Computer Science and Engineering, The Hebrew University of Jerusalem, Jerusalem 91904, Israel
Daphna Weinshall
Professor of Computer Science, Hebrew University
computer vision, machine learning, visual perception