🤖 AI Summary
To address performance degradation caused by label noise in supervised learning, this paper proposes a family of robust loss functions characterized by a Bounded Variation Ratio (BVR). Departing from conventional symmetry-based conditions, the paper introduces the variation ratio as a novel theoretical criterion for quantifying loss robustness and rigorously establishes its intrinsic connection to noise tolerance. Based on this insight, it develops a concise and scalable framework for constructing asymmetric robust losses, unifying standard losses—including cross-entropy and mean absolute error—into BVR-bounded forms. Theoretical analysis proves that BVR-bounded losses satisfy sufficient conditions for noise robustness. Extensive experiments on CIFAR-10/100 and WebVision demonstrate that the approach significantly improves generalization accuracy and training stability under diverse noise settings, including symmetric, asymmetric, and instance-dependent label noise.
📝 Abstract
Mitigating the negative impact of noisy labels has been a perennial issue in supervised learning. Robust loss functions have emerged as a prevalent solution to this problem. In this work, we introduce the Variation Ratio as a novel property related to the robustness of loss functions, and propose a new family of robust loss functions, termed Variation-Bounded Loss (VBL), characterized by a bounded variation ratio. We provide theoretical analyses of the variation ratio, proving that a smaller variation ratio leads to better robustness. Furthermore, we show that the variation ratio provides a feasible way to relax the symmetric condition and offers a more concise path to satisfying the asymmetric condition. Based on the variation ratio, we reformulate several commonly used loss functions into a variation-bounded form for practical applications. Experiments on various datasets demonstrate the effectiveness and flexibility of our approach.