From Average Sensitivity to Small-Loss Regret Bounds under Random-Order Model

📅 2026-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a novel approach to online learning in the random-order model that achieves small-loss regret bounds relative to the optimal offline loss $\mathrm{OPT}_T$, without requiring regularity assumptions such as smoothness of the loss functions. By characterizing how the approximation accuracy $\varepsilon$ of an offline algorithm affects its average sensitivity through a function $\varphi(\varepsilon)$, and analyzing the resulting trade-off via conjugate functions, the study bridges average sensitivity and small-loss regret theory for the first time. An AdaGrad-style adaptive tuning strategy is introduced to select $\varepsilon$; combined with the batch-to-online conversion framework of Dong and Yoshida (2023), the method applies without smoothness assumptions, and $(1\pm\varepsilon)$-cut sparsification of submodular hypergraphs extends it to online submodular function minimization. The resulting algorithms yield improved regret bounds for online $k$-means clustering, low-rank approximation, regression, and submodular function minimization, achieving a regret of $\tilde{O}(n^{3/4}(1 + \mathrm{OPT}_T^{3/4}))$ for the latter.
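One way to see why a conjugate of $\varphi$ should appear is the following back-of-the-envelope sketch; it is purely illustrative, and both the decomposition and the profile $\varphi(\varepsilon)=C/\varepsilon$ are assumptions for the example, not the dependence established in the paper. Suppose the regret of playing a $(1+\varepsilon)$-approximate offline solution on the revealed prefix splits, up to polylogarithmic factors, into an approximation term and a sensitivity term:

$$\mathrm{Regret}_T \;\lesssim\; \min_{\varepsilon>0}\bigl\{\varepsilon\,\mathrm{OPT}_T \;+\; \varphi(\varepsilon)\bigr\}.$$

For the hypothetical profile $\varphi(\varepsilon)=C/\varepsilon$, balancing the two terms at $\varepsilon^{\star}=\sqrt{C/\mathrm{OPT}_T}$ gives $\mathrm{Regret}_T \lesssim 2\sqrt{C\,\mathrm{OPT}_T}$, a small-loss bound that scales with the offline optimum $\mathrm{OPT}_T$ rather than with the horizon $T$.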

📝 Abstract
We study online learning in the random-order model, where the multiset of loss functions is chosen adversarially but revealed in a uniformly random order. Building on the batch-to-online conversion by Dong and Yoshida (2023), we show that if an offline algorithm admits a $(1+\varepsilon)$-approximation guarantee and the effect of $\varepsilon$ on its average sensitivity is characterized by a function $\varphi(\varepsilon)$, then an adaptive choice of $\varepsilon$ yields a small-loss regret bound of $\tilde O(\varphi^{\star}(\mathrm{OPT}_T))$, where $\varphi^{\star}$ is the concave conjugate of $\varphi$, $\mathrm{OPT}_T$ is the offline optimum over $T$ rounds, and $\tilde O$ hides polylogarithmic factors in $T$. Our method requires no regularity assumptions on loss functions, such as smoothness, and can be viewed as a generalization of the AdaGrad-style tuning applied to the approximation parameter $\varepsilon$. Our result recovers and strengthens the $(1+\varepsilon)$-approximate regret bounds of Dong and Yoshida (2023) and yields small-loss regret bounds for online $k$-means clustering, low-rank approximation, and regression. We further apply our framework to online submodular function minimization using $(1\pm\varepsilon)$-cut sparsifiers of submodular hypergraphs, obtaining a small-loss regret bound of $\tilde O(n^{3/4}(1 + \mathrm{OPT}_T^{3/4}))$, where $n$ is the ground-set size. Our approach sheds light on the power of sparsification and related techniques in establishing small-loss regret bounds in the random-order model.
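As a concrete (and heavily simplified) illustration of the batch-to-online conversion with an adaptively tuned approximation parameter, the sketch below replays a black-box offline oracle on the prefix of losses revealed so far. The oracle `offline_approx` and the schedule $\varepsilon_t \propto (1+\text{accumulated loss})^{-1/2}$ are hypothetical placeholders; the paper's actual schedule is derived from the sensitivity profile $\varphi$.

```python
import math

def batch_to_online(loss_stream, offline_approx, c=1.0):
    """Illustrative batch-to-online loop with an adaptive approximation
    parameter.

    `offline_approx(losses, eps)` is a hypothetical black-box oracle assumed
    to return a point whose total loss on `losses` is within a (1 + eps)
    factor of the offline optimum (and some default point when `losses` is
    empty).  The eps schedule below is an AdaGrad-flavored placeholder that
    shrinks as the accumulated loss grows; it is NOT the schedule derived in
    the paper, which depends on the sensitivity profile phi.
    """
    past_losses = []       # loss functions revealed so far
    cumulative_loss = 0.0  # total loss incurred by the online player

    for loss_t in loss_stream:
        # Placeholder adaptive choice: larger accumulated loss -> smaller eps.
        eps_t = c / math.sqrt(1.0 + cumulative_loss)

        # Commit to a point using only the already-revealed prefix of losses.
        x_t = offline_approx(past_losses, eps_t)

        # The new loss function is revealed and its value at x_t is paid.
        cumulative_loss += loss_t(x_t)
        past_losses.append(loss_t)

    return cumulative_loss
```

On each round the point is committed before the new loss is revealed, matching the online protocol; in the random-order model the revealed prefix is, roughly speaking, a uniformly random sample of the full multiset, which is what lets the offline approximation guarantee transfer to the online setting.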
Problem

Research questions and friction points this paper is trying to address.

online learning
random-order model
small-loss regret
approximation algorithms
average sensitivity
Innovation

Methods, ideas, or system contributions that make the work stand out.

average sensitivity
small-loss regret
random-order model
adaptive approximation
submodular sparsification