🤖 AI Summary
This paper investigates regret bounds for batch learning under logarithmic loss in the individual setting, where the data are deterministic and no stochastic assumptions hold. It introduces a novel leave-one-out (LOO)-based regret criterion and establishes, for the first time, a general learnability theory for batch learning in this setting. Through minimax analysis and a VC-dimension characterization, the paper derives a structural upper bound of $\frac{d\log N}{N}$ and matching lower bounds for certain classes, revealing slower convergence than in the stochastic setting. Notably, on the $m$-symbol multinomial simplex, it pins down the tight minimax regret $\frac{m-1}{N} + o(1/N)$. The approach integrates LOO cross-validation, information-theoretic techniques, and online learning tools, yielding a new theoretical framework and benchmark results for statistical learning theory in the individual setting.
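As a point of reference, a natural way to formalize such a leave-one-out regret for a predictor $q$ on a deterministic sequence $x^N = (x_1, \dots, x_N)$ is sketched below; the notation ($x^{N\setminus t}$ for the sequence with position $t$ held out) is illustrative and may differ from the paper's exact definition:

$$
R_N(q, \mathcal{H}) \;=\; \frac{1}{N}\left(\sum_{t=1}^{N} -\log q\!\left(x_t \,\middle|\, x^{N\setminus t}\right) \;-\; \min_{h \in \mathcal{H}} \sum_{t=1}^{N} -\log h(x_t)\right),
$$

i.e., the learner's average held-out log-loss minus that of the best fixed hypothesis in hindsight.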
📝 Abstract
We study batch learning with log-loss in the individual setting, where the outcome sequence is deterministic. Because empirical statistics are not directly applicable in this regime, obtaining regret guarantees for batch learning has long posed a fundamental challenge. We propose a natural criterion based on leave-one-out regret and analyze its minimax value for several hypothesis classes. For the multinomial simplex over $m$ symbols, we show that the minimax regret is $\frac{m-1}{N} + o\!\left(\frac{1}{N}\right)$, and compare it to the stochastic realizable case where it is $\frac{m-1}{2N} + o\!\left(\frac{1}{N}\right)$. More generally, we prove that every hypothesis class of VC dimension $d$ is learnable in the individual batch-learning problem, with regret at most $\frac{d\log(N)}{N} + o\!\left(\frac{\log(N)}{N}\right)$, and we establish matching lower bounds for certain classes. We further derive additional upper bounds that depend on structural properties of the hypothesis class. These results establish, for the first time, that universal batch learning with log-loss is possible in the individual setting.
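To make the multinomial-simplex rate concrete, here is a minimal Python sketch of one plausible LOO evaluation protocol: for each position $t$, an add-one (Laplace) predictor is fit on the other $N-1$ symbols, and the comparator is the best fixed distribution in hindsight (the empirical distribution of the full sequence). The function names and the choice of Laplace smoothing are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def log_loss(p, x):
    """Per-symbol log-loss -log p[x] (natural log)."""
    return -np.log(p[x])

def loo_regret(seq, m, alpha=1.0):
    """Illustrative leave-one-out regret on the m-symbol simplex.

    For each position t, the learner sees the other N-1 symbols and
    predicts with an add-alpha (Laplace-smoothed) distribution; the
    comparator is the best fixed distribution in hindsight, i.e. the
    empirical distribution of the full sequence. This is one plausible
    reading of an LOO criterion, not necessarily the paper's definition.
    """
    seq = np.asarray(seq)
    N = len(seq)
    counts = np.bincount(seq, minlength=m).astype(float)
    emp = counts / N  # best fixed hypothesis in hindsight
    learner_loss = 0.0
    comparator_loss = 0.0
    for t in range(N):
        loo_counts = counts.copy()
        loo_counts[seq[t]] -= 1.0  # leave observation t out
        p = (loo_counts + alpha) / (N - 1 + alpha * m)
        learner_loss += log_loss(p, seq[t])
        comparator_loss += log_loss(emp, seq[t])
    return (learner_loss - comparator_loss) / N

# Deterministic (individual) sequence; no stochastic assumption is used.
seq = [0, 1, 1, 2, 0, 1, 2, 2, 1, 0] * 100  # N = 1000, m = 3
print(loo_regret(seq, m=3))  # ~ (m-1)/N = 0.002
```

Under this particular protocol, the add-one predictor's regret evaluates in closed form to $\log\!\left(1 + \frac{m-1}{N}\right) = \frac{m-1}{N} + o(1/N)$ for every individual sequence, which matches the scale of the minimax value quoted in the abstract.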