CLID-MU: Cross-Layer Information Divergence Based Meta Update Strategy for Learning with Noisy Labels

📅 2025-07-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Meta-learning struggles with label noise when clean annotated meta-datasets are unavailable. Method: This paper proposes a clean-label-free meta-learning framework that leverages discrepancies in structural consistency between the last hidden layer and the output layer to quantify cross-layer information divergence and identify noisy samples; it further introduces a meta-update mechanism grounded in structural alignment and consistency constraints to achieve noise-robust optimization. Contribution/Results: The method requires neither clean labels nor auxiliary metadata. Evaluated on CIFAR-10, CIFAR-100, and WebVision under diverse synthetic and real-world label-noise settings, it consistently outperforms state-of-the-art approaches, demonstrating superior effectiveness, generalization, and robustness.

📝 Abstract
Learning with noisy labels (LNL) is essential for training deep neural networks with imperfect data. Meta-learning approaches have achieved success by using a clean, unbiased labeled set to train a robust model. However, this approach heavily depends on the availability of a clean labeled meta-dataset, which is difficult to obtain in practice. In this work, we thus tackle the challenge of meta-learning for noisy-label scenarios without relying on a clean labeled dataset. Our approach leverages the data itself while bypassing the need for labels. Building on the insight that clean samples effectively preserve the consistency of related data structures across the last hidden layer and the final layer, whereas noisy samples disrupt this consistency, we design the Cross-layer Information Divergence-based Meta Update Strategy (CLID-MU). CLID-MU leverages the alignment of data structures across these diverse feature spaces to evaluate model performance and uses this alignment to guide training. Experiments on benchmark datasets with varying amounts of labels under both synthetic and real-world noise demonstrate that CLID-MU outperforms state-of-the-art methods. The code is released at https://github.com/ruofanhu/CLID-MU.
Problem

Research questions and friction points this paper is trying to address.

Meta-learning without clean labeled dataset
Handling noisy labels in deep learning
Cross-layer consistency for robust training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses cross-layer information divergence
Evaluates model via feature space alignment
No need for clean labeled meta-dataset
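The core idea above — comparing how the batch's similarity structure looks in the last hidden layer versus the output layer — can be sketched as a simple divergence score. The sketch below is illustrative only: the function name, temperature parameter, and choice of symmetric KL over row-normalized cosine-similarity matrices are assumptions, not the paper's exact formulation (see the linked repository for the authors' implementation).

```python
import numpy as np

def clid_score(hidden_feats, output_feats, temperature=1.0):
    """Hypothetical sketch of a cross-layer information divergence score.

    Compares the batch-level similarity structure computed from last-hidden-
    layer features against the structure computed from output-layer features.
    The intuition from the paper: clean batches keep these structures aligned
    (low divergence), while noisy labels disrupt the alignment (high score).
    """
    eps = 1e-12

    def cosine_sim(x):
        # Pairwise cosine similarity within the batch.
        x = x / (np.linalg.norm(x, axis=1, keepdims=True) + eps)
        return x @ x.T

    def row_softmax(sim):
        # Turn each row of similarities into a probability distribution.
        sim = sim / temperature
        sim = sim - sim.max(axis=1, keepdims=True)  # numerical stability
        e = np.exp(sim)
        return e / e.sum(axis=1, keepdims=True)

    p = row_softmax(cosine_sim(hidden_feats))  # structure in hidden space
    q = row_softmax(cosine_sim(output_feats))  # structure in output space

    # Symmetric KL divergence between the two structures, averaged over rows.
    kl_pq = (p * (np.log(p + eps) - np.log(q + eps))).sum(axis=1)
    kl_qp = (q * (np.log(q + eps) - np.log(p + eps))).sum(axis=1)
    return float((kl_pq + kl_qp).mean() / 2)
```

A label-free score like this can serve as the meta-objective: identical structures across layers give a score near zero, and the meta-update would favor model weights that keep the divergence low on unlabeled data.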