ChronoSelect: Robust Learning with Noisy Labels via Dynamic Temporal Memory

📅 2025-07-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the degradation of deep neural network generalization caused by label noise in real-world data, this paper proposes ChronoSelect, a novel framework centered on a dynamic temporal memory mechanism. The mechanism employs a four-stage sliding update architecture to compactly compress prediction histories, integrated with dual-branch consistency verification and dynamic trajectory analysis for adaptive tri-partitioning of samples into clean, noisy, and boundary categories. The paper provides theoretical guarantees for the convergence and stability of the mechanism under label noise. Extensive experiments on diverse synthetic and real-world noisy benchmarks, including CIFAR-10/100-N, WebVision, and Clothing1M, demonstrate that ChronoSelect consistently outperforms state-of-the-art methods. The framework significantly enhances model robustness and generalization in the presence of label corruption.

📝 Abstract
Training deep neural networks on real-world datasets is often hampered by the presence of noisy labels, which can be memorized by over-parameterized models, leading to significant degradation in generalization performance. While existing methods for learning with noisy labels (LNL) have made considerable progress, they fundamentally suffer from static snapshot evaluations and fail to leverage the rich temporal dynamics of learning evolution. In this paper, we propose ChronoSelect (chrono denoting its temporal nature), a novel framework featuring an innovative four-stage memory architecture that compresses prediction history into compact temporal distributions. Our unique sliding update mechanism with controlled decay maintains only four dynamic memory units per sample, progressively emphasizing recent patterns while retaining essential historical knowledge. This enables precise three-way sample partitioning into clean, boundary, and noisy subsets through temporal trajectory analysis and dual-branch consistency. Theoretical guarantees prove the mechanism's convergence and stability under noisy conditions. Extensive experiments demonstrate ChronoSelect's state-of-the-art performance across synthetic and real-world benchmarks.
Problem

Research questions and friction points this paper is trying to address.

Addressing noisy labels in deep learning datasets
Overcoming static evaluation limitations in noisy label learning
Enhancing generalization via temporal dynamics in training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Four-stage memory architecture compresses prediction history
Sliding update mechanism with controlled decay
Temporal trajectory analysis for sample partitioning
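The paper itself does not include code here, but the sliding-memory and tri-partitioning ideas listed above can be illustrated with a rough sketch. All concrete details below (the decay rule, the per-stage blending, and the confidence thresholds) are assumptions for illustration, not the authors' implementation, and the dual-branch consistency check is omitted for brevity.

```python
import numpy as np

class TemporalMemory:
    """Hypothetical sketch: four sliding memory units per sample, each
    holding a class distribution. Older stages decay toward their
    successors so recent predictions are progressively emphasized."""

    def __init__(self, num_samples, num_classes, decay=0.9):
        self.decay = decay
        # Start every unit at the uniform distribution.
        self.mem = np.full((num_samples, 4, num_classes), 1.0 / num_classes)

    def update(self, indices, probs):
        """Slide the memory: each older stage blends toward the stage
        after it, and the newest stage takes the current predictions."""
        m = self.mem[indices]                                  # (B, 4, C)
        m[:, :-1] = self.decay * m[:, :-1] + (1.0 - self.decay) * m[:, 1:]
        m[:, -1] = probs
        self.mem[indices] = m

    def partition(self, indices, labels, lo=0.3, hi=0.7):
        """Three-way split by mean confidence on the given label across
        the four temporal units (thresholds are illustrative guesses)."""
        traj = self.mem[indices]                               # (B, 4, C)
        conf = traj[np.arange(len(indices)), :, labels].mean(axis=1)
        return np.where(conf >= hi, "clean",
               np.where(conf <= lo, "noisy", "boundary"))
```

With this sketch, a sample whose predictions stay confidently on its given label drifts into the "clean" subset over epochs, while a sample the model never fits stays "noisy", and wavering samples land in "boundary".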
Jianchao Wang
Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China
Qingfeng Li
School of Computer Science and Engineering, University of Electronic Science and Technology of China
Pengcheng Zheng
School of Computer Science and Engineering, University of Electronic Science and Technology of China
Xiaorong Pu
University of Electronic Science and Technology of China
Yazhou Ren
School of Computer Science and Engineering, University of Electronic Science and Technology of China; Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China