🤖 AI Summary
Machine learning systems employing self-labeled data replay are prone to falling into erroneous-label "echo chambers," leading to performance degradation through repeated sampling of incorrect labels.
Method: This paper proposes the *Online Replay Learning* theoretical framework, formally characterizing the impact of iteratively sampling erroneous labels on learning dynamics. It introduces the *Extended Threshold Dimension*, $\mathrm{ExThD}(\mathcal{H})$, as a precise measure of learnability, and combines adversarial label-sequence modeling, analysis under both stochastic and adaptive adversaries, and robust closure-based learning.
Contributions/Results: We prove that $\mathrm{ExThD}(\mathcal{H})$ is the tightest possible mistake upper bound and establish its equivalence to the class closure property. Crucially, we show that classical online algorithms may incur unboundedly many mistakes in this setting, whereas our closure-aware algorithm achieves the tight $\mathrm{ExThD}(\mathcal{H})$ mistake bound. The framework provides foundational guarantees for reliability in self-supervised and continual learning settings.
📄 Abstract
As machine learning systems increasingly train on self-annotated data, they risk reinforcing errors and becoming echo chambers of their own beliefs. We model this phenomenon by introducing a learning-theoretic framework: Online Learning in the Replay Setting. In round $t$, the learner outputs a hypothesis $\hat{h}_t$; the adversary then reveals either the true label $f^\ast(x_t)$ or a replayed label $\hat{h}_i(x_t)$ from an earlier round $i < t$. A mistake is counted only when the true label is shown, yet classical algorithms such as the Standard Optimal Algorithm (SOA) or the halving algorithm are easily misled by the replayed errors.
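To illustrate the protocol, here is a minimal, hypothetical Python sketch (the encoding of hypotheses as integer thresholds and all names are our own, not the paper's): a naive consistent learner treats every revealed label as ground truth, so a single replayed error can poison, and even empty, its version space.

```python
def threshold(c):
    """Up-threshold hypothesis h_c(x) = 1 iff x >= c."""
    return lambda x: 1 if x >= c else 0

class NaiveConsistentLearner:
    """Trusts every revealed label, so replayed errors poison its version space."""
    def __init__(self, candidates):
        self.version_space = list(candidates)

    def hypothesis(self):
        # Predict with the smallest still-consistent threshold (fallback: 0).
        c = self.version_space[0] if self.version_space else 0
        return threshold(c)

    def update(self, x, y):
        self.version_space = [c for c in self.version_space
                              if threshold(c)(x) == y]

def run_replay(learner, f_star, rounds):
    """rounds: list of (x_t, i) where i is an earlier round whose hypothesis
    is replayed, or None to reveal the true label f*(x_t).
    Mistakes are counted only on true-label rounds."""
    past, mistakes = [], 0
    for x, replay_i in rounds:
        h = learner.hypothesis()
        past.append(h)
        if replay_i is None:
            y = f_star(x)                # adversary reveals the true label
            mistakes += (h(x) != y)      # only these rounds count
        else:
            y = past[replay_i](x)        # adversary replays h_i(x_t)
        learner.update(x, y)             # learner cannot tell the two apart
    return mistakes
```

Against the target threshold(5), replaying the learner's own round-0 error plants a false positive label; once the true label for that same point arrives, no hypothesis in the class is consistent with the poisoned history and the version space becomes empty.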
We introduce the Extended Threshold dimension, $\mathrm{ExThD}(\mathcal{H})$, and prove matching upper and lower bounds that make $\mathrm{ExThD}(\mathcal{H})$ the exact measure of learnability in this model. A closure-based learner makes at most $\mathrm{ExThD}(\mathcal{H})$ mistakes against any adaptive adversary, and no algorithm can perform better. For stochastic adversaries, we prove a similar bound for every intersection-closed class. The replay setting is provably harder than the classical mistake bound setting: some classes have constant Littlestone dimension but arbitrarily large $\mathrm{ExThD}(\mathcal{H})$. Proper learning exhibits an even sharper separation: a class is properly learnable under replay if and only if it is (almost) intersection-closed. Otherwise, every proper learner suffers $\Omega(T)$ errors, whereas our improper algorithm still achieves the $\mathrm{ExThD}(\mathcal{H})$ bound. These results give the first tight analysis of learning against replay adversaries, based on new results for closure-type algorithms.
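To make the closure idea concrete, the following is a hedged sketch for one intersection-closed class, up-thresholds on the reals (our illustrative encoding; the paper's algorithm is stated abstractly). The learner predicts with the closure of the positives seen so far, i.e. the smallest hypothesis in the class containing all of them. Every hypothesis it emits labels a subset of the target's positives, so any replayed 1-label is necessarily a true 1-label, and 0-labels never move the closure; replay therefore cannot poison it.

```python
import math

class ClosureLearner:
    """Closure-style learner for the intersection-closed class of
    up-thresholds h_c(x) = 1 iff x >= c (illustrative encoding).
    Predicts with the smallest class member containing all observed positives."""
    def __init__(self):
        self.min_pos = math.inf   # closure is [min_pos, inf); empty at start

    def predict(self, x):
        return 1 if x >= self.min_pos else 0

    def update(self, x, y):
        # Only positive labels shrink the threshold toward the target.
        # Negative labels (true or replayed) are ignored, and replayed
        # positives are always genuine, since every past hypothesis here
        # predicts 1 only inside the target's positive region.
        if y == 1:
            self.min_pos = min(self.min_pos, x)
```

The conservative start (all-negative prediction) and positives-only updates are what keep the learner inside the target class throughout, which is the invariant the replay-robustness argument rests on.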