🤖 AI Summary
This paper addresses a limitation of the classical Tsybakov low-noise condition: its restrictiveness and narrow applicability in classification. To overcome this, we propose a novel "model-margin noise" (MM-noise) assumption, whose noise level depends on the discrepancy between a given hypothesis and the Bayes classifier rather than on intrinsic boundary properties of the data distribution; MM-noise is therefore strictly weaker than the Tsybakov condition. Under this more general noise setting, we establish enhanced $\mathcal{H}$-consistency bounds that unify binary and multiclass classification and interpolate smoothly, from linear to square-root convergence rates, as the noise parameter varies. Our analysis covers a broad family of surrogate losses, including the logistic, hinge (SVM), and cross-entropy losses. For the same noise exponents, our bounds match existing results under a weaker assumption; their tightness and practicality are illustrated through explicit analytical expressions and comparative tables.
📝 Abstract
We introduce a new low-noise condition for classification, the Model Margin Noise (MM noise) assumption, and derive enhanced $\mathcal{H}$-consistency bounds under this condition. MM noise is weaker than the Tsybakov noise condition: it is implied by the Tsybakov condition but can hold even when the Tsybakov condition fails, because it depends on the discrepancy between a given hypothesis and the Bayes classifier rather than on the intrinsic distributional minimal margin (see Figure 1 for an illustration of an explicit example). This hypothesis-dependent assumption yields enhanced $\mathcal{H}$-consistency bounds for both binary and multi-class classification. Our results extend the enhanced $\mathcal{H}$-consistency bounds of Mao, Mohri, and Zhong (2025a) with the same favorable exponents but under a weaker assumption than the Tsybakov noise condition; they interpolate smoothly between the linear and square-root regimes for intermediate noise levels. We also instantiate these bounds for common surrogate loss families and provide illustrative tables.
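For context, the classical Tsybakov low-noise (margin) condition that MM noise relaxes can be stated in its standard binary-classification form as follows. This is well-known background, not taken from the paper itself, and the symbols ($\eta$, $B$, $\beta$) may differ from the paper's notation:

```latex
% Tsybakov noise condition (binary case): with conditional probability
% \eta(x) = \mathbb{P}(Y = 1 \mid X = x), there exist constants
% B > 0 and \beta \ge 0 such that, for all t > 0,
\mathbb{P}\bigl( \lvert \eta(X) - \tfrac{1}{2} \rvert \le t \bigr) \;\le\; B\, t^{\beta}.
```

The condition bounds the probability mass near the decision boundary $\eta(x) = 1/2$, an intrinsic property of the distribution. By contrast, as the abstract states, MM noise is hypothesis-dependent: it constrains the discrepancy between a given hypothesis and the Bayes classifier, so it can hold even for distributions where the bound above fails.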