Regret minimization in Linear Bandits with offline data via extended D-optimal exploration

📅 2025-08-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses regret minimization in online linear bandits leveraging rich offline data, motivated by applications such as recommender systems and online advertising. The authors propose the Offline-Online Phased Elimination (OOPE) algorithm, which introduces an extended D-optimal design that, for the first time, quantifies offline data quality, and combines spectral analysis of the offline Gram matrix with a Frank–Wolfe approximation to adaptively steer exploration. Theoretically, OOPE achieves an online regret bound of $\tilde{O}(\sqrt{d_{\text{eff}} T \log(|\mathcal{A}|T)} + d^2)$, significantly improving over purely online methods when the offline data is sufficiently informative. Moreover, the paper establishes the first minimax lower bound that explicitly depends on offline data quality. The core contribution is a quantifiable, data-driven link between offline data quality and online performance, enabling regret guarantees that adapt to empirical data characteristics.
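The effective dimension $d_{\text{eff}}$ is derived from the eigen-spectrum of the offline Gram matrix. The paper's exact definition is not given in this card; the sketch below uses an assumed soft-count proxy (directions whose offline eigenvalue is small relative to the online horizon $T$ count as poorly explored) purely for illustration:

```python
import numpy as np

def effective_dimension(X_off, T):
    """Illustrative 'effective dimension' of offline data X_off (n, d) at
    online horizon T: a soft count of directions whose Gram-matrix eigenvalue
    lambda_k is small relative to T. (The paper's exact definition may
    differ; this proxy is an assumption made for exposition.)"""
    lam = np.linalg.eigvalsh(X_off.T @ X_off)  # eigen-spectrum of the Gram matrix
    return float(np.sum(1.0 / (1.0 + lam / T)))
```

Under this proxy, no offline data gives $d_{\text{eff}} = d$ (recovering the purely online regime), while abundant, well-spread offline data drives $d_{\text{eff}}$ toward $0$, matching the two regimes described in the abstract.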

📝 Abstract
We consider the problem of online regret minimization in linear bandits with access to prior observations (offline data) from the underlying bandit model. There are numerous applications where extensive offline data is available, such as recommendation systems and online advertising; consequently, this problem has been studied intensively in recent literature. Our algorithm, Offline-Online Phased Elimination (OOPE), effectively incorporates the offline data to substantially reduce the online regret compared to prior work. To leverage offline information prudently, OOPE uses an extended D-optimal design within each exploration phase. OOPE achieves an online regret of $\tilde{O}(\sqrt{d_{\text{eff}} T \log\left(|\mathcal{A}|T\right)}+d^2)$, where $d_{\text{eff}} \leq d$ is the effective problem dimension, which measures the number of poorly explored directions in the offline data and depends on the eigen-spectrum $(\lambda_k)_{k \in [d]}$ of the Gram matrix of the offline data. The eigen-spectrum $(\lambda_k)_{k \in [d]}$ is a quantitative measure of the *quality* of the offline data. If the offline data is poorly explored ($d_{\text{eff}} \approx d$), we recover the established regret bounds for the purely online setting, while when the offline data is abundant ($T_{\text{off}} \gg T$) and well-explored ($d_{\text{eff}} = o(1)$), the online regret reduces substantially. Additionally, we provide the first known minimax regret lower bounds in this setting that depend explicitly on the quality of the offline data. These lower bounds establish the optimality of our algorithm in regimes where the offline data is either well-explored or poorly explored. Finally, by using a Frank-Wolfe approximation to the extended optimal design we further improve the $O(d^{2})$ term to $O\left(\frac{d^{2}}{d_{\text{eff}}} \min\{d_{\text{eff}}, 1\}\right)$, which can be substantial in high dimensions with moderate quality of offline data ($d_{\text{eff}} = \Omega(1)$).
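The classical D-optimal design maximizes $\log\det$ of the design's information matrix, and the Kiefer–Wolfowitz theorem says the optimal design has maximum leverage exactly $d$. The sketch below runs a standard Frank–Wolfe iteration for this objective; the `M0` warm-start with an offline Gram matrix is only an assumed stand-in for the paper's "extended" design, whose exact construction is not spelled out in this card:

```python
import numpy as np

def frank_wolfe_d_optimal(X, M0=None, iters=500):
    """Frank-Wolfe iteration for a D-optimal design: maximize
    log det(M0 + sum_a pi_a x_a x_a^T) over design weights pi on the simplex.

    X  : (K, d) array of arm feature vectors.
    M0 : optional (d, d) offline Gram matrix (assumed warm-start; zero
         recovers the classical offline-free design).
    """
    K, d = X.shape
    if M0 is None:
        M0 = np.zeros((d, d))
    pi = np.full(K, 1.0 / K)              # start from the uniform design
    for k in range(iters):
        M = M0 + X.T @ (pi[:, None] * X)  # information matrix of current design
        Minv = np.linalg.inv(M)
        lev = np.einsum("ij,jk,ik->i", X, Minv, X)  # leverage x_a^T M^{-1} x_a
        a = int(np.argmax(lev))           # gradient of log det is the leverage,
                                          # so the best simplex vertex is argmax
        gamma = 2.0 / (k + 3)             # diminishing step (shifted so the
                                          # first step keeps M nonsingular)
        pi = (1.0 - gamma) * pi
        pi[a] += gamma
    return pi
```

For symmetric arm sets (e.g. the standard basis), the weights converge to the uniform design, as the Kiefer–Wolfowitz equivalence predicts.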
Problem

Research questions and friction points this paper is trying to address.

Minimize online regret in linear bandits using offline data
Improve exploration efficiency via extended D-optimal design
Quantify offline data quality impact on regret bounds
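Plugging numbers into the stated regret bound illustrates the gain from informative offline data (constants and log factors hidden by the $\tilde{O}$ are ignored, so this is purely illustrative arithmetic):

```python
import math

def oope_regret_bound(d_eff, d, T, num_arms):
    """Evaluate the stated OOPE bound sqrt(d_eff * T * log(|A| * T)) + d^2,
    ignoring constants and log factors hidden by the O-tilde."""
    return math.sqrt(d_eff * T * math.log(num_arms * T)) + d ** 2

# Hypothetical problem sizes, chosen only to illustrate the two regimes.
d, T, A = 50, 10**6, 1000
online_only = oope_regret_bound(d, d, T, A)    # d_eff = d: purely online rate
warm_start = oope_regret_bound(1.0, d, T, A)   # well-explored offline data
```

With these (assumed) values, the leading $\sqrt{d_{\text{eff}} T \log(|\mathcal{A}|T)}$ term shrinks by roughly a $\sqrt{d}$ factor when $d_{\text{eff}}$ drops from $d$ to a constant.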
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extended D-optimal design exploration
Offline-Online Phased Elimination algorithm
Frank-Wolfe approximation for optimal design