🤖 AI Summary
This paper addresses locally adaptive optimal estimation in adversarial online nonparametric regression when the smoothness of the comparator is unknown. For general convex losses, the authors propose a parameter-free algorithm that tracks a local Hölder profile by pruning a chaining tree, yielding data-driven adaptation to spatially varying smoothness. Unlike prior approaches that assume a global or piecewise-constant Hölder exponent, the method imposes no such structure and tunes itself locally from the data. It attains the minimax optimal regret bound over Hölder function classes globally, while achieving locally adaptive optimal rates in regions of heterogeneous smoothness; it is the first computationally efficient algorithm to do so in the adversarial setting. Relative to methods built on fixed smoothness assumptions, this substantially improves robustness and practical applicability.
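
To make the core idea concrete, here is a minimal sketch of a chaining-tree predictor over a dyadic partition of [0, 1]: each node carries a scalar correction learned by online gradient descent, and the prediction at a point sums the corrections along its root-to-leaf path, so coarse levels capture the bulk of the function and deep levels refine it locally. The class name `ChainingTree`, the fixed depth, the learning-rate schedule, and the per-level step scaling are illustrative assumptions, not the paper's exact construction.

```python
from collections import defaultdict

class ChainingTree:
    """Dyadic chaining tree on [0, 1] with per-node online corrections."""

    def __init__(self, depth=6, lr=0.1):
        self.depth = depth            # finest dyadic level (assumed fixed here)
        self.lr = lr                  # base learning rate (assumed schedule)
        self.w = defaultdict(float)   # node corrections keyed by (level, index)

    def _path(self, x):
        # Dyadic cell containing x at each level 0..depth.
        return [(l, min(int(x * 2**l), 2**l - 1)) for l in range(self.depth + 1)]

    def predict(self, x):
        # Coarse-to-fine sum of corrections along the root-to-leaf path.
        return sum(self.w[node] for node in self._path(x))

    def update(self, x, grad):
        # grad is the derivative of the convex loss at the prediction,
        # e.g. grad = 2 * (pred - y) for the squared loss.
        for l, i in self._path(x):
            # Shrink steps at finer levels so deep corrections stay small,
            # echoing the shrinking scales used in chaining arguments.
            self.w[(l, i)] -= self.lr * grad / (2 ** (l / 2))
```

In this picture, pruning amounts to cutting subtrees whose corrections stay negligible, so the effective depth, and with it the effective Hölder exponent, can vary across the domain.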
📝 Abstract
We study adversarial online nonparametric regression with general convex losses and propose a parameter-free learning algorithm that achieves minimax optimal rates. Our approach leverages chaining trees to compete against Hölder functions and establishes optimal regret bounds. While competing with nonparametric function classes can be challenging, they often exhibit local patterns - such as local Hölder continuity - that online algorithms can exploit. Without prior knowledge, our method dynamically tracks and adapts to different Hölder profiles by pruning a core chaining tree structure, aligning itself with local smoothness variations. This leads to the first computationally efficient algorithm with locally adaptive optimal rates for online regression in an adversarial setting. Finally, we discuss how these notions could be extended to a boosting framework, offering promising directions for future research.
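
As a hedged illustration of the local adaptivity described above, the snippet below runs the `ChainingTree` sketch on a stream whose target is smooth on the left half of [0, 1] and rough on the right half, then applies a toy pruning pass that keeps deep nodes only where their learned corrections are non-negligible. The target function, the depth cutoff, and the `1e-3` threshold are arbitrary illustrative choices, not the paper's pruning criterion.

```python
import random

def target(x):
    # Smooth on [0, 0.5], rapidly oscillating on (0.5, 1]: a simple stand-in
    # for spatially heterogeneous Hölder smoothness.
    return 0.3 * x if x < 0.5 else 0.3 * x + 0.2 * ((int(x * 64) % 2) - 0.5)

tree = ChainingTree(depth=8, lr=0.2)
cum_loss, T = 0.0, 20000
for t in range(T):
    x = random.random()
    pred = tree.predict(x)
    y = target(x)
    cum_loss += (pred - y) ** 2
    tree.update(x, grad=2.0 * (pred - y))   # squared-loss gradient

# Toy pruning pass: keep shallow nodes everywhere, deep nodes only where
# their corrections matter, so the tree stays deep only in the rough region.
kept = {node: v for node, v in tree.w.items()
        if node[0] <= 3 or abs(v) > 1e-3}
print(f"avg loss: {cum_loss / T:.4f}; kept {len(kept)} of {len(tree.w)} nodes")
```

After training, most surviving deep nodes sit over the oscillating region, which is the data-driven, spatially varying depth profile that the paper's pruning mechanism formalizes with regret guarantees.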