🤖 AI Summary
This paper studies the minimax regret of sequential probability assignment under logarithmic loss, both with and without side information. To capture problem complexity, the authors introduce "sequential square-root entropy," a geometric complexity measure closely tied to Hellinger distance. In the no-side-information setting, this quantity yields an upper bound on the Shtarkov sum, which characterizes the minimax regret exactly. In the side-information setting, the analysis combines this entropy with Hellinger distance, covering numbers, and scale-sensitive dimensions to establish upper and lower regret bounds that match, up to logarithmic factors, for classes in the Donsker regime. Together, the results give a quantitative link between the regret of sequential prediction and the geometric structure of the underlying function class.
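For reference, the classical objects named above can be written down directly; the display below is a sketch in standard notation ($\mathcal{F}$ the expert class, $\mathcal{Y}$ a finite outcome alphabet, $y^n$ an outcome sequence), not a quotation of the paper's own definitions.

```latex
% Log loss of a predictive distribution p on outcome y: \ell(p, y) = -\log p(y).
% Without side information, the minimax regret over horizon n equals the
% log of the Shtarkov sum, a classical identity (Shtarkov, 1987):
\[
  \mathcal{R}_n(\mathcal{F})
    = \inf_{\widehat{p}} \, \sup_{y^n \in \mathcal{Y}^n}
        \log \frac{\sup_{f \in \mathcal{F}} f(y^n)}{\widehat{p}(y^n)}
    = \log \sum_{y^n \in \mathcal{Y}^n} \sup_{f \in \mathcal{F}} f(y^n),
\]
% attained by the normalized maximum likelihood (NML) predictor; the sum
% becomes an integral when the outcome space is continuous.
```

The paper's contribution, as the summary notes, is to bound this sum via sequential square-root entropy rather than evaluate it directly.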
📝 Abstract
We study the problem of sequential probability assignment under logarithmic loss, both with and without side information. Our objective is to analyze the minimax regret -- a notion extensively studied in the literature -- in terms of geometric quantities, such as covering numbers and scale-sensitive dimensions. We show that the minimax regret for the case of no side information (equivalently, the Shtarkov sum) can be upper bounded in terms of sequential square-root entropy, a notion closely related to Hellinger distance. For the problem of sequential probability assignment with side information, we develop both upper and lower bounds based on the aforementioned entropy. The lower bound matches the upper bound, up to log factors, for classes in the Donsker regime (according to our definition of entropy).
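To unpack the geometric language: the Hellinger distance that motivates the "square-root" terminology has the standard form sketched below, and "Donsker regime" is stated here in its conventional entropy-growth sense as background; the paper's precise definition of sequential square-root entropy is its own and is not reproduced.

```latex
% One common convention for the (squared) Hellinger distance between
% densities p and q:
\[
  H^{2}(p, q) = \frac{1}{2} \int \left( \sqrt{p} - \sqrt{q} \right)^{2},
\]
% i.e., up to the factor 1/2, the L_2 distance between \sqrt{p} and \sqrt{q};
% hence covering numbers computed on {\sqrt{f} : f \in \mathcal{F}} give a
% natural "square-root" entropy. A class is conventionally called Donsker
% when its entropy at scale \epsilon grows like \epsilon^{-p} with p < 2;
% how the paper's sequential entropy instantiates this condition is fixed
% by its own definition.
```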