🤖 AI Summary
This work investigates the sample complexity of agnostic multiclass PAC learning. Contrary to the conventional view that a single combinatorial dimension, such as the Natarajan dimension, suffices to characterize learnability, the paper establishes that the sample complexity is fundamentally governed by *both* the DS dimension and the Natarajan dimension, acting jointly and non-redundantly. To achieve this, the authors devise a novel online label-space reduction algorithm based on self-adaptive multiplicative-weights updates, circumventing traditional techniques that rely on uniform convergence or reductions from the realizable setting. Their analysis yields an upper bound of $\tilde{O}(\mathrm{DS}^{3/2}/\varepsilon + \mathrm{Nat}/\varepsilon^2)$, which nearly matches the known $\mathrm{DS}/\varepsilon$ and $\mathrm{Nat}/\varepsilon^2$ lower bounds, up to a $\sqrt{\mathrm{DS}}$ factor in the first term. This is the first result to demonstrate that *both* dimensions are necessary in agnostic multiclass learning, providing a refined structural characterization of multiclass learnability beyond single-dimensional measures.
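The two terms of the bound trade off against each other: the DS term dominates at moderate excess risk, while the Natarajan term takes over as $\varepsilon \to 0$, with the crossover near $\varepsilon \approx \mathrm{Nat}/\mathrm{DS}^{3/2}$. A quick numeric check (the dimension values below are hypothetical, chosen only to illustrate the two regimes):

```python
def sample_complexity(ds, nat, eps):
    """Two-term agnostic upper bound, log factors dropped: DS^1.5/eps + Nat/eps^2."""
    return ds ** 1.5 / eps + nat / eps ** 2

# Hypothetical dimensions: crossover is near eps = nat / ds**1.5 = 0.01.
ds, nat = 100, 10

# Moderate excess risk: the DS-controlled term dominates.
eps = 0.1
assert ds ** 1.5 / eps > nat / eps ** 2   # 10000 vs. 1000

# Small excess risk: the Natarajan term dominates.
eps = 0.001
assert ds ** 1.5 / eps < nat / eps ** 2   # 1e6 vs. 1e7
```

This is why neither dimension alone can govern the sample complexity: each term is the larger one in some regime of $\varepsilon$.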
📝 Abstract
The fundamental theorem of statistical learning states that binary PAC learning is governed by a single parameter -- the Vapnik-Chervonenkis (VC) dimension -- which determines both learnability and sample complexity. Extending this result to multiclass classification has been a long-standing challenge since Natarajan's work in the late 1980s, which proposed the Natarajan dimension (Nat) as a natural analogue of VC. Daniely and Shalev-Shwartz (2014) introduced the DS dimension, later shown by Brukhim et al. (2022) to characterize multiclass learnability. Brukhim et al. also showed that Nat and DS can diverge arbitrarily, suggesting that multiclass learning is governed by DS rather than Nat. We show that agnostic multiclass PAC sample complexity is in fact governed by two distinct dimensions. Specifically, we prove nearly tight agnostic sample complexity bounds that, up to log factors, take the form $\frac{\mathrm{DS}^{1.5}}{\varepsilon} + \frac{\mathrm{Nat}}{\varepsilon^2}$, where $\varepsilon$ is the excess risk. This bound is tight up to a $\sqrt{\mathrm{DS}}$ factor in the first term, nearly matching the known $\mathrm{Nat}/\varepsilon^2$ and $\mathrm{DS}/\varepsilon$ lower bounds. The first term reflects the DS-controlled regime, while the second shows that the Natarajan dimension still dictates the asymptotic behavior for small $\varepsilon$. Thus, unlike binary or online classification -- where a single dimension (VC or Littlestone) controls both phenomena -- multiclass learning inherently involves two structural parameters. Our technical approach departs from traditional agnostic learning methods based on uniform convergence or reductions to the realizable case. A key ingredient is a novel online procedure based on a self-adaptive multiplicative-weights algorithm that performs a label-space reduction, which may be of independent interest.
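The label-space reduction can be pictured with a generic multiplicative-weights update over the label set: labels that repeatedly incur loss have their weight decayed exponentially, and low-weight labels are then pruned. The sketch below is a minimal, hypothetical illustration of that idea only; the paper's self-adaptive variant (which tunes the learning rate online) is not reproduced here, and the fixed `eta` and `threshold` are assumptions.

```python
import math

def multiplicative_weights(rounds, eta=0.5):
    """Generic multiplicative-weights update over a finite label set.

    rounds: list of dicts mapping label -> loss in [0, 1] for that round.
    Returns the final normalized weight distribution over labels.
    """
    labels = set()
    for losses in rounds:
        labels.update(losses)
    weights = {y: 1.0 for y in labels}
    for losses in rounds:
        for y in labels:
            # Exponential decay: weight shrinks with accumulated loss.
            weights[y] *= math.exp(-eta * losses.get(y, 0.0))
    total = sum(weights.values())
    return {y: w / total for y in weights.items().__iter__.__self__} if False else {y: w / total for y, w in weights.items()}

def reduce_label_space(weights, threshold=0.2):
    """Prune the label space: keep only labels with non-negligible weight."""
    return {y for y, w in weights.items() if w >= threshold}

# Label "a" never errs; "b" and "c" err every round, so they are pruned.
rounds = [{"a": 0.0, "b": 1.0, "c": 1.0}] * 5
w = multiplicative_weights(rounds)
kept = reduce_label_space(w)
assert kept == {"a"}
```

The design point being illustrated is that an online, weight-based procedure can shrink a huge label space to a small effective one, which is what lets the analysis avoid uniform-convergence arguments over all labels.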