🤖 AI Summary
Existing approaches for the Stochastically Extended Adversarial (SEA) model rely on prior knowledge of problem-specific parameters, such as the domain diameter $D$ and the Lipschitz constant $G$ of the losses, which severely limits their practical applicability.
Method: We propose the first fully parameter-free online optimization algorithms for stochastic-adversarial hybrid environments, requiring no prior information. They simultaneously adapt to the comparator vector, domain scale, and gradient norm via an Optimistic Online Newton Step (OONS) framework augmented with adaptive regularization and variance-aware updates.
Contribution/Results: We establish an expected regret bound of $\tilde{O}\big(\|u\|_2^2 + \|u\|_2(\sqrt{\sigma^2_{1:T}} + \sqrt{\Sigma^2_{1:T}})\big)$ without assuming knowledge of $G$ or $D$, where $\sigma^2_{1:T}$ and $\Sigma^2_{1:T}$ denote the cumulative stochastic variance and cumulative adversarial variation, respectively. This is the first result achieving joint adaptivity to all three critical parameters (comparator norm, domain size, and gradient magnitude), significantly enhancing the practicality of SEA models in dynamic real-world settings.
📝 Abstract
We develop the first parameter-free algorithms for the Stochastically Extended Adversarial (SEA) model, a framework that bridges adversarial and stochastic online convex optimization. Existing approaches for the SEA model require prior knowledge of problem-specific parameters, such as the diameter of the domain $D$ and the Lipschitz constant of the loss functions $G$, which limits their practical applicability. Addressing this, we develop parameter-free methods by leveraging the Optimistic Online Newton Step (OONS) algorithm to eliminate the need for these parameters. We first establish a comparator-adaptive algorithm for the scenario with unknown domain diameter but known Lipschitz constant, achieving an expected regret bound of $\tilde{O}\big(\|u\|_2^2 + \|u\|_2(\sqrt{\sigma^2_{1:T}} + \sqrt{\Sigma^2_{1:T}})\big)$, where $u$ is the comparator vector and $\sigma^2_{1:T}$ and $\Sigma^2_{1:T}$ represent the cumulative stochastic variance and cumulative adversarial variation, respectively. We then extend this to the more general setting where both $D$ and $G$ are unknown, attaining a comparator- and Lipschitz-adaptive algorithm. Notably, the regret bound exhibits the same dependence on $\sigma^2_{1:T}$ and $\Sigma^2_{1:T}$, demonstrating the efficacy of our proposed methods even when both parameters are unknown in the SEA model.
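To give a concrete feel for the optimistic Newton-style updates the abstract refers to, below is a minimal one-dimensional sketch. This is an illustrative simplification, not the paper's parameter-free algorithm: the fixed domain radius `D`, the step parameter `beta`, and the hint sequence `hints` are all assumptions for the demo. The second-order accumulator grows with the squared hint error $(g_t - m_t)^2$, mirroring how the regret bound scales with the stochastic and adversarial variation rather than with $T$ directly.

```python
def oons_1d(grads, hints, D=1.0, beta=1.0, eps=1.0):
    """Simplified optimistic Online Newton Step on the interval [-D, D].

    grads : observed gradients g_t
    hints : optimistic guesses m_t for g_t (e.g. the previous gradient)
    Returns the sequence of iterates x_1, ..., x_T.
    """
    x, A = 0.0, eps              # iterate and second-order accumulator
    xs = []
    for g, m in zip(grads, hints):
        xs.append(x)
        A += (g - m) ** 2        # variance-aware: grows only with hint error
        x = x - (g - m) / (beta * A)   # Newton-style step on the "surprise"
        x = max(-D, min(D, x))         # project back onto the domain
    return xs
```

With hints set to the previous gradient, the accumulator (and hence the effective step size) stops changing once the environment becomes predictable, which is the mechanism behind variance-dependent bounds; the paper's actual contribution is removing the need to know `D` and the Lipschitz bound on `grads` at all.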