🤖 AI Summary
This work investigates lower bounds on the *minimal batch regret* in universal prediction, i.e., the performance limit of a predictor granted access to batches of training data. Methodologically, it combines information-theoretic tools with a conditional regret analysis tailored to the batch-learning setting. The key contribution is a *conditional regret-capacity theorem*, which characterizes the batch regret lower bound as a conditional capacity; the bound is instantiated on the class of binary memoryless sources as a concrete example. The theorem is then generalized to Rényi information measures, exposing a close connection between the conditional Rényi divergence and the conditional Sibson mutual information and thereby unifying conditional regret analysis with generalized Rényi information measures.
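For orientation, here is a minimal sketch of the classical theorem and the conditional form it suggests; the notation (parameter $\Theta$, sequence $X^n$, training batch $Z$) is assumed for illustration and may differ from the paper's.

```latex
% Classical regret-capacity theorem (log loss): the minimax average
% regret over a parametric class {P_theta} equals the capacity of the
% channel theta -> X^n, i.e. mutual information maximized over priors w.
\min_{Q}\,\max_{\theta}\, D\!\left(P_{\theta} \,\middle\|\, Q\right)
  \;=\; \max_{w}\, I_{w}\!\left(\Theta; X^{n}\right).

% Conditional analogue (assumed form, not quoted from the paper): with a
% training batch Z available, the predictor may depend on Z, and the
% minimal batch regret is governed by a conditional capacity.
\min_{Q(\cdot\,\mid Z)}\,\max_{\theta}\,
  \mathbb{E}\!\left[D\!\left(P_{\theta} \,\middle\|\, Q(\cdot\,\mid Z)\right)\right]
  \;=\; \max_{w}\, I_{w}\!\left(\Theta; X^{n} \mid Z\right).
```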
📝 Abstract
We derive a conditional version of the classical regret-capacity theorem. In universal prediction, this result yields lower bounds on the minimal batch regret, a recently introduced generalization of the average regret to the setting in which batches of training data are available to the predictor. As an example, we apply the result to the class of binary memoryless sources. Finally, we generalize the theorem to Rényi information measures, revealing a deep connection between the conditional Rényi divergence and the conditional Sibson mutual information.
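For reference, the Rényi-order quantities involved are standard; the definitions below are not excerpts from the paper. The minimization identity in the second display is the classical (unconditional) link between the two quantities that the paper's result generalizes to the conditional setting.

```latex
% Renyi divergence of order alpha between distributions P and Q:
D_{\alpha}(P \,\|\, Q) \;=\; \frac{1}{\alpha - 1}\,
  \log \sum_{x} P(x)^{\alpha}\, Q(x)^{1-\alpha}.

% Sibson mutual information of order alpha: the minimal Renyi divergence
% from the joint P_XY to a product distribution P_X x Q_Y (Sibson, 1969):
I_{\alpha}(X;Y) \;=\; \min_{Q_{Y}} D_{\alpha}\!\left(P_{XY} \,\middle\|\, P_{X}\times Q_{Y}\right)
  \;=\; \frac{\alpha}{\alpha - 1}\,
  \log \sum_{y} \Bigl(\sum_{x} P_{X}(x)\, P_{Y\mid X}(y\mid x)^{\alpha}\Bigr)^{1/\alpha}.
```

A small numerical sanity check of this identity on a toy binary example; all distributions and the order $\alpha$ here are illustrative choices, not values from the paper.

```python
import numpy as np

alpha = 2.0
p_x = np.array([0.5, 0.5])            # illustrative prior on a binary parameter
p_y_given_x = np.array([[0.9, 0.1],   # two illustrative binary channels
                        [0.3, 0.7]])

# Closed form for the Sibson mutual information of order alpha.
inner = (p_x[:, None] * p_y_given_x ** alpha).sum(axis=0) ** (1.0 / alpha)
sibson = alpha / (alpha - 1.0) * np.log(inner.sum())

# Brute-force the minimization over product distributions P_X x Q_Y.
p_xy = p_x[:, None] * p_y_given_x

def d_alpha(q_y):
    """Renyi divergence D_alpha(P_XY || P_X x Q_Y)."""
    prod = p_x[:, None] * q_y[None, :]
    return np.log((p_xy ** alpha * prod ** (1.0 - alpha)).sum()) / (alpha - 1.0)

grid = np.linspace(1e-6, 1.0 - 1e-6, 10001)
best = min(d_alpha(np.array([q, 1.0 - q])) for q in grid)

print(f"closed form: {sibson:.6f}   minimized divergence: {best:.6f}")
# The two values agree up to grid resolution.
```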