Dynamic Regret Reduces to Kernelized Static Regret

📅 2025-07-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper studies dynamic regret minimization in online convex optimization, aiming to achieve low cumulative loss against arbitrary time-varying benchmark sequences. The authors propose a "dynamic-to-static" regret reduction framework based on reproducing kernel Hilbert spaces (RKHS): the original problem is mapped into a function space, where an equivalent static regret problem is formulated in an infinite-dimensional RKHS. Unlike prior reductions, which are restricted to linear losses, this approach supports general convex loss sequences. It yields scale-free, directionally-adaptive dynamic regret bounds: recovering the optimal $\mathcal{O}\big(\sqrt{T\sum_{t=1}^{T}\|u_t - u_{t-1}\|}\big)$ rate in the linear case, and achieving $\mathcal{O}\big(\|u\|^2 + d_{\mathrm{eff}}(\lambda)\ln T\big)$ in exp-concave and improper linear regression settings. Despite the infinite-dimensional formulation, the resulting algorithms admit efficient implementation thanks to the reproducing property of RKHSs.
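To make the reduction concrete for linear losses, here is a minimal sketch of functional online gradient descent over the time axis: the learner's iterate is stored via its kernel expansion, so predictions require only kernel evaluations. This is an illustration of the general idea, not the paper's exact algorithm; the RBF kernel, its bandwidth, the class name, and the projection step (projecting the evaluated point rather than the function) are all simplifying assumptions.

```python
import numpy as np

def rbf(s, t, sigma=10.0):
    # Scalar kernel on the time axis [1, T]; hypothetical bandwidth choice.
    return np.exp(-((s - t) ** 2) / (2 * sigma ** 2))

class KernelizedOGD:
    """Functional OGD in a (vector-valued) RKHS over the time axis.

    For linear losses <g_t, w>, the functional gradient at round t is the
    kernel section K(t, .) g_t, so the iterate has the finite expansion
        f_t(s) = -eta * sum_{r < t} K(r, s) g_r,
    and evaluating f_t never requires an infinite-dimensional object.
    """

    def __init__(self, dim, eta=0.1, radius=1.0):
        self.dim, self.eta, self.radius = dim, eta, radius
        self.times, self.grads = [], []  # support times r and gradients g_r

    def predict(self, t):
        # Evaluate f_t at time t via the reproducing property.
        w = np.zeros(self.dim)
        for r, g in zip(self.times, self.grads):
            w -= self.eta * rbf(r, t) * g
        # Simplification: project the evaluation onto the Euclidean ball W.
        n = np.linalg.norm(w)
        return w if n <= self.radius else w * (self.radius / n)

    def update(self, t, g):
        # Record the functional-gradient term K(t, .) g_t.
        self.times.append(t)
        self.grads.append(np.asarray(g, dtype=float))
```

Note that the expansion length grows linearly in $t$; the point of the sketch is only that the reproducing property keeps every step computable with finitely many kernel evaluations.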

📝 Abstract
We study dynamic regret in online convex optimization, where the objective is to achieve low cumulative loss relative to an arbitrary benchmark sequence. By observing that competing with an arbitrary sequence of comparators $u_{1},\ldots,u_{T}$ in $\mathcal{W}\subseteq\mathbb{R}^{d}$ is equivalent to competing with a fixed comparator function $u:[1,T]\to\mathcal{W}$, we frame dynamic regret minimization as a static regret problem in a function space. By carefully constructing a suitable function space in the form of a Reproducing Kernel Hilbert Space (RKHS), our reduction enables us to recover the optimal $R_{T}(u_{1},\ldots,u_{T}) = \mathcal{O}\big(\sqrt{\sum_{t}\|u_{t}-u_{t-1}\|\,T}\big)$ dynamic regret guarantee in the setting of linear losses, and yields new scale-free and directionally-adaptive dynamic regret guarantees. Moreover, unlike prior dynamic-to-static reductions -- which are valid only for linear losses -- our reduction holds for any sequence of losses, allowing us to recover $\mathcal{O}\big(\|u\|^{2}+d_{\mathrm{eff}}(\lambda)\ln T\big)$ bounds in exp-concave and improper linear regression settings, where $d_{\mathrm{eff}}(\lambda)$ is a measure of complexity of the RKHS. Despite working in an infinite-dimensional space, the resulting reduction leads to algorithms that are computable in practice, due to the reproducing property of RKHSs.
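In symbols, the reduction rests on the observation that any comparator sequence is the trace of a single function of time. A sketch in the abstract's notation, where $f_t$ denotes the learner's function-space iterate (a symbol introduced here for illustration):

```latex
R_T(u_1,\ldots,u_T)
  = \sum_{t=1}^{T}\ell_t(w_t)-\sum_{t=1}^{T}\ell_t(u_t)
  = \sum_{t=1}^{T}\ell_t\big(f_t(t)\big)-\sum_{t=1}^{T}\ell_t\big(u(t)\big),
```

where $u:[1,T]\to\mathcal{W}$ is the fixed function with $u(t)=u_t$ and the learner plays $w_t=f_t(t)$. The right-hand side is a static regret against the single comparator $u$, now living in a function space; choosing that space to be an RKHS is what makes the resulting algorithms computable.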
Problem

Research questions and friction points this paper is trying to address.

Reducing dynamic regret to static regret in online optimization
Achieving optimal dynamic regret with RKHS function space
Extending dynamic-to-static reduction to non-linear loss sequences
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reduces dynamic regret to static regret via RKHS
Achieves optimal dynamic regret for linear losses
Enables practical algorithms in infinite-dimensional spaces
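The $d_{\mathrm{eff}}(\lambda)$ appearing in the stated bounds is the standard kernel effective dimension, $d_{\mathrm{eff}}(\lambda) = \mathrm{tr}\big(K(K+\lambda I)^{-1}\big)$ for a Gram matrix $K$. A minimal sketch of computing it (the function name is hypothetical; the paper's precise definition may differ in normalization):

```python
import numpy as np

def effective_dimension(K, lam):
    """Kernel effective dimension d_eff(lam) = tr(K (K + lam*I)^{-1}).

    K   : (T, T) Gram matrix of the kernel evaluated on the time points.
    lam : regularization parameter lambda > 0.
    """
    T = K.shape[0]
    # solve (K + lam*I) X = K, then take the trace of X = (K + lam*I)^{-1} K
    return float(np.trace(np.linalg.solve(K + lam * np.eye(T), K)))
```

As $\lambda$ grows the effective dimension shrinks toward zero, and as $\lambda \to 0$ it approaches the rank of $K$; this is what lets an $\mathcal{O}(\|u\|^2 + d_{\mathrm{eff}}(\lambda)\ln T)$ bound stay meaningful even though the ambient RKHS is infinite-dimensional.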