A second order regret bound for NormalHedge

📅 2026-02-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of obtaining higher-order regret bounds in prediction with expert advice by proposing a variant of the NormalHedge algorithm. Using a continuous-time limit based on stochastic differential equations to guide the algorithm design, combined with self-concordance techniques for the discrete-time analysis, the paper establishes a second-order ε-quantile regret bound for NormalHedge. When \(V_T > \log N\), where \(V_T\) is the cumulative second moment of the instantaneous per-expert regret averaged under a distribution determined by the algorithm, the resulting bound is \(O(\sqrt{V_T \log(V_T/\varepsilon)})\). This improves upon conventional first-order bounds, with the advantage especially pronounced on "easy" sequences, where \(V_T\) is much smaller than the horizon \(T\).

📝 Abstract
We consider the problem of prediction with expert advice for ``easy'' sequences. We show that a variant of NormalHedge enjoys a second-order $\epsilon$-quantile regret bound of $O\big(\sqrt{V_T \log(V_T/\epsilon)}\big)$ when $V_T > \log N$, where $V_T$ is the cumulative second moment of the instantaneous per-expert regret, averaged with respect to a natural distribution determined by the algorithm. The algorithm is motivated by a continuous-time limit using stochastic differential equations; the discrete-time analysis uses self-concordance techniques.
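The paper's specific variant is not detailed on this page. As background, the classical NormalHedge update of Chaudhuri, Freund, and Hsu (2009) can be sketched as follows; the function name, the bisection scheme, and the iteration count are illustrative choices, not the paper's implementation:

```python
import math

def normalhedge_weights(regrets):
    """Compute next-round expert weights from cumulative regrets
    R_i, following the classical NormalHedge rule (Chaudhuri,
    Freund, Hsu 2009); the paper's variant differs in details."""
    plus = [max(r, 0.0) for r in regrets]  # clipped regrets [R_i]_+
    n = len(plus)
    if all(r == 0.0 for r in plus):
        return [1.0 / n] * n  # no expert is ahead: play uniform

    # Scale c > 0 is chosen so the average potential
    # (1/N) sum_i exp([R_i]_+^2 / (2c)) equals e; the left side
    # is decreasing in c, so bisection applies.
    def avg_potential(c):
        return sum(math.exp(r * r / (2.0 * c)) for r in plus) / n

    lo = 0.0
    hi = max(1.0, max(plus) ** 2)  # at this c every exponent <= 1/2
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if avg_potential(mid) > math.e:
            lo = mid
        else:
            hi = mid
    c = 0.5 * (lo + hi)

    # Each expert is weighted by the derivative of its potential.
    w = [r / c * math.exp(r * r / (2.0 * c)) for r in plus]
    total = sum(w)
    return [x / total for x in w]
```

Note that the rule is parameter-free (no learning rate) and assigns zero weight to experts with non-positive regret, which is what makes quantile-style guarantees natural for this family of algorithms.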
Problem

Research questions and friction points this paper is trying to address.

prediction with expert advice
regret bound
second-order regret
easy sequences
NormalHedge
Innovation

Methods, ideas, or system contributions that make the work stand out.

second-order regret
NormalHedge
stochastic differential equations
self-concordance
prediction with expert advice