Small Gradient Norm Regret for Online Convex Optimization

📅 2026-01-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a limitation of existing problem-dependent regret measures in online convex optimization, such as $L^\star$-regret, which become overly coarse when the curvature of the loss functions vanishes. To this end, the paper introduces a new regret measure for smooth losses, termed $G^\star$-regret, which depends on the cumulative squared gradient norm evaluated at the hindsight-optimal decision. This measure strictly refines $L^\star$-regret, can be arbitrarily sharper in low-curvature regimes, and extends naturally to dynamic regret and bandit settings. Through upper and lower bound analyses complemented by empirical validation, the study shows that $G^\star$-regret more accurately characterizes algorithmic performance under interpolation conditions, and as a byproduct refines existing convergence analyses of stochastic optimization algorithms in the interpolation regime.
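Concretely, the quantity underlying the new measure can be sketched as follows. The notation below is inferred from the summary and is not taken from the paper: $f_t$ denotes the round-$t$ smooth loss, $\mathcal{X}$ the decision set, and $x^\star$ the hindsight-optimal decision.

```latex
% Sketch (notation assumed): the G^* and L^* quantities at the
% hindsight minimizer x^* of the cumulative loss.
x^\star \in \operatorname*{arg\,min}_{x \in \mathcal{X}} \sum_{t=1}^{T} f_t(x),
\qquad
G^\star = \sum_{t=1}^{T} \bigl\lVert \nabla f_t(x^\star) \bigr\rVert^2,
\qquad
L^\star = \sum_{t=1}^{T} f_t(x^\star).
```

For $\beta$-smooth non-negative losses, the standard self-bounding property $\lVert \nabla f_t(x)\rVert^2 \le 2\beta\, f_t(x)$ gives $G^\star \le 2\beta L^\star$, which is consistent with the claim that $G^\star$-regret refines $L^\star$-regret, and that the gap widens as the curvature $\beta$ shrinks.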

📝 Abstract
This paper introduces a new problem-dependent regret measure for online convex optimization with smooth losses. The notion, which we call the $G^\star$ regret, depends on the cumulative squared gradient norm evaluated at the decision in hindsight. We show that the $G^\star$ regret strictly refines the existing $L^\star$ (small loss) regret, and that it can be arbitrarily sharper when the losses have vanishing curvature around the hindsight decision. We establish upper and lower bounds on the $G^\star$ regret and extend our results to dynamic regret and bandit settings. As a byproduct, we refine the existing convergence analysis of stochastic optimization algorithms in the interpolation regime. Some experiments validate our theoretical findings.
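The abstract's claim that $G^\star$ can be arbitrarily sharper than $L^\star$ under vanishing curvature can be illustrated with a toy example (not from the paper): quadratic losses $f_t(x) = \tfrac{c}{2}(x - a_t)^2$ with curvature $c$. Here $L^\star$ scales with $c$ but $G^\star$ scales with $c^2$, so their ratio is $2c$ and $G^\star$ becomes negligible as $c \to 0$.

```python
import numpy as np

# Toy illustration (assumed example, not the paper's construction):
# smooth non-negative losses f_t(x) = (c/2) * (x - a_t)^2 with curvature c.
# L* = sum_t f_t(x*) and G* = sum_t ||grad f_t(x*)||^2, where x* minimizes
# the cumulative loss in hindsight (here, the mean of the targets a_t).
rng = np.random.default_rng(0)
a = rng.normal(size=1000)   # per-round targets a_t
x_star = a.mean()           # hindsight minimizer of sum_t f_t

for c in [1.0, 0.1, 0.01]:
    L_star = 0.5 * c * np.sum((x_star - a) ** 2)
    G_star = np.sum((c * (x_star - a)) ** 2)
    # For this family, G*/L* = 2c: G* vanishes faster as curvature -> 0.
    print(f"c={c:5.2f}  L*={L_star:9.4f}  G*={G_star:9.4f}  ratio={G_star / L_star:.4f}")
```

The printed ratio equals $2c$ for every curvature level, matching the abstract's point that the $G^\star$ quantity strictly refines the small-loss quantity and can be arbitrarily smaller.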
Problem

Research questions and friction points this paper is trying to address.

online convex optimization
regret minimization
smooth losses
gradient norm
interpolation regime
Innovation

Methods, ideas, or system contributions that make the work stand out.

$G^\star$ regret
online convex optimization
small gradient norm
dynamic regret
interpolation regime