AI Summary
To address the performance degradation of neural networks under label noise, this paper proposes a sample-level dynamic reweighting training framework based on Multiplicative Weights (MW) updates. It is the first work to integrate the MW mechanism into deep learning optimization, providing theoretical convergence guarantees under gradient descent and analytically characterizing its intrinsic robustness to noisy labels in the 1D setting. Unlike existing approaches, the method requires no auxiliary networks or explicit noise modeling; instead, it adaptively adjusts sample importance via lightweight, iterative weight updates. Extensive experiments demonstrate significant improvements in classification accuracy on CIFAR-10/100 with synthetic label noise and on Clothing1M with real-world noisy labels. Moreover, the resulting models exhibit enhanced robustness against adversarial attacks. This work establishes a novel paradigm for noise-robust training: concise, theoretically grounded, and computationally efficient.
Abstract
Neural networks are widely used due to their strong performance. Yet, they degrade in the presence of noisy labels at training time. Inspired by the setting of learning with expert advice, where multiplicative weights (MW) updates were recently shown to be robust to moderate corruptions of the expert advice, we propose to use MW to reweight examples during neural network optimization. We theoretically establish the convergence of our method when used with gradient descent and prove its advantages in the 1D case. We then empirically validate our findings in the general case by showing that MW improves neural networks' accuracy in the presence of label noise on CIFAR-10, CIFAR-100, and Clothing1M. We also show the impact of our approach on adversarial robustness.
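The abstract does not spell out the update rule, but the MW mechanism from learning with expert advice suggests a natural form: maintain a weight per training sample, multiply each weight by an exponential of its (negated) loss after every step, renormalize, and use the weights in the gradient. The following is a minimal sketch of that idea on a toy 1D logistic-regression problem with flipped labels; the step sizes `lr` and `eta`, the noise rate, and the exact weighting of the gradient are assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D binary classification with label noise:
# the true rule is (x > 0), but 20% of training labels are flipped.
n = 200
x = rng.normal(size=n)
y = (x > 0).astype(float)
flip = rng.random(n) < 0.2
y[flip] = 1.0 - y[flip]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Logistic model: p = sigmoid(a * x + b)
a, b = 0.0, 0.0
lr = 0.5             # gradient-descent step size (assumed value)
eta = 0.1            # MW learning rate (assumed value)
w = np.ones(n) / n   # uniform sample weights, kept as a distribution

for step in range(500):
    p = sigmoid(a * x + b)
    # Per-sample cross-entropy losses (clipped for numerical safety)
    losses = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    # Weighted gradient-descent step on the model parameters
    grad = w * (p - y)
    a -= lr * np.dot(grad, x)
    b -= lr * np.sum(grad)
    # Multiplicative-weights update: samples with persistently high loss
    # (likely mislabeled) see their weights shrink exponentially.
    w = w * np.exp(-eta * losses)
    w = w / w.sum()
```

Flipped samples keep contradicting the majority signal, so they accumulate loss and lose weight relative to clean samples; the model parameters are then driven mostly by the clean data. No auxiliary network or explicit noise model is needed, matching the lightweight design the summary describes.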