🤖 AI Summary
This paper studies online inverse linear optimization: over $T$ rounds, a learner estimates an agent's hidden linear objective function from its (approximately) optimal actions on time-varying feasible sets. By applying the online Newton step (ONS) to appropriate exp-concave loss functions, the authors obtain an $O(n \ln T)$ regret bound, improving the prior state of the art of $O(n^4 \ln T)$ by a factor of $n^3$, where $n$ is the dimension of the objective vectors. For the case where the agent's actions may be suboptimal, a MetaGrad-based variant, which runs ONS with $\Theta(\ln T)$ learning rates in parallel, attains a robust regret bound of $O(n \ln T + \sqrt{\Delta_T n \ln T})$, where $\Delta_T$ quantifies the cumulative suboptimality of the actions. An $\Omega(n)$ lower bound shows the upper bound is tight up to an $O(\ln T)$ factor; for $n = 2$ an $O(1)$ constant regret bound is possible, and the paper delineates the challenges in extending this to higher dimensions.
📝 Abstract
We study an online learning problem where, over $T$ rounds, a learner observes both time-varying sets of feasible actions and an agent's optimal actions, selected by solving linear optimization over the feasible actions. The learner sequentially makes predictions of the agent's underlying linear objective function, and their quality is measured by the regret, the cumulative gap between optimal objective values and those achieved by following the learner's predictions. A seminal work by Bärmann et al. (ICML 2017) showed that online learning methods can be applied to this problem to achieve regret bounds of $O(\sqrt{T})$. Recently, Besbes et al. (COLT 2021, Oper. Res. 2023) significantly improved the result by achieving an $O(n^4 \ln T)$ regret bound, where $n$ is the dimension of the ambient space of objective vectors. Their method, based on the ellipsoid method, runs in polynomial time but is inefficient for large $n$ and $T$. In this paper, we obtain an $O(n \ln T)$ regret bound, improving upon the previous bound of $O(n^4 \ln T)$ by a factor of $n^3$. Our method is simple and efficient: we apply the online Newton step (ONS) to appropriate exp-concave loss functions. Moreover, for the case where the agent's actions are possibly suboptimal, we establish an $O(n \ln T + \sqrt{\Delta_T n \ln T})$ regret bound, where $\Delta_T$ is the cumulative suboptimality of the agent's actions. This bound is achieved by using MetaGrad, which runs ONS with $\Theta(\ln T)$ different learning rates in parallel. We also provide a simple instance that implies an $\Omega(n)$ lower bound, showing that our $O(n \ln T)$ bound is tight up to an $O(\ln T)$ factor. This gives rise to a natural question: can the $O(\ln T)$ factor in the upper bound be removed? For the special case of $n=2$, we show that an $O(1)$ regret bound is possible, while we delineate challenges in extending this result to higher dimensions.
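The core algorithmic tool named in the abstract, the online Newton step (ONS), admits a compact generic form: maintain $A_t = \varepsilon I + \sum_{s \le t} g_s g_s^\top$ and update $x_{t+1} = \Pi\bigl(x_t - \tfrac{1}{\gamma} A_t^{-1} g_t\bigr)$. The sketch below is a minimal generic ONS update, not the paper's specific construction: the exp-concave losses, parameter choices, and feasible set used in the paper are not reproduced here, and the Euclidean-ball projection is a simplification (textbook ONS projects in the norm induced by $A_t$).

```python
import numpy as np

class OnlineNewtonStep:
    """Minimal sketch of a generic Online Newton Step (ONS) update.

    Maintains A_t = eps*I + sum_s g_s g_s^T and performs
    x_{t+1} = Proj(x_t - (1/gamma) * A_t^{-1} g_t).
    Parameter names (gamma, eps, radius) are illustrative defaults,
    not the paper's tuned constants.
    """

    def __init__(self, n, gamma=0.5, eps=1.0, radius=1.0):
        self.x = np.zeros(n)          # current prediction of the objective vector
        self.A = eps * np.eye(n)      # regularized sum of gradient outer products
        self.gamma = gamma
        self.radius = radius

    def update(self, g):
        # Rank-one update of the Hessian-like matrix.
        self.A += np.outer(g, g)
        # Newton-style step: solve A_t d = g_t instead of inverting A_t.
        self.x = self.x - np.linalg.solve(self.A, g) / self.gamma
        # Simplified Euclidean projection onto a norm ball; exact ONS
        # projects in the A_t-induced norm onto the true feasible set.
        norm = np.linalg.norm(self.x)
        if norm > self.radius:
            self.x *= self.radius / norm
        return self.x
```

For the robust bound, MetaGrad would run $\Theta(\ln T)$ such ONS instances with geometrically spaced learning rates and aggregate them with exponential weights; that aggregation layer is omitted here.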