AI Summary
This paper studies the online sparse linear approximation problem: given a dynamically arriving sequence of measurements, the goal is to predict, in real time, the optimal sparse linear combination of columns from a fixed measurement matrix. Motivated by applications in clinical trials, web caching, and resource allocation, we propose Follow-The-Approximate-Sparse-Leader (FTASL), a novel meta-algorithm. FTASL is the first method to establish data-dependent sublinear static regret bounds, scaling between logarithmic and square-root rates. It integrates sparse projection, adaptive regularization, and an approximate-leader framework, ensuring both computational tractability and rigorous theoretical guarantees. Under standard assumptions on sparsity and condition number, we prove that FTASL achieves low regret growth. Empirical evaluations demonstrate that FTASL significantly outperforms existing online sparse learning baselines in both convergence speed and approximation accuracy.
Abstract
We consider the problem of *online sparse linear approximation*, where one predicts the best sparse approximation of a sequence of measurements as a linear combination of columns of a given measurement matrix. Such online prediction problems are ubiquitous, ranging from medical trials to web caching to resource allocation. The inherent difficulty of offline sparse recovery also makes the online problem challenging. In this letter, we propose Follow-The-Approximate-Sparse-Leader, an efficient online meta-policy for this problem. Through a detailed theoretical analysis, we prove that, under certain assumptions on the measurement sequence, the proposed policy enjoys a data-dependent sublinear upper bound on the static regret, which can range from logarithmic to square-root. Numerical simulations corroborate the theoretical findings and demonstrate the efficacy of the proposed online policy.
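To make the setting concrete, the follow-the-leader-with-sparse-projection idea behind such a meta-policy can be sketched as below. This is a hedged illustration, not the paper's actual algorithm: the helper names (`hard_threshold`, `ftasl_sketch`), the fixed ridge term standing in for adaptive regularization, and the choice of hard thresholding as the approximate sparse projection are all assumptions made for the sketch.

```python
import numpy as np

def hard_threshold(x, k):
    # Keep the k largest-magnitude entries of x; zero out the rest.
    # (One simple choice of approximate sparse projection.)
    idx = np.argsort(np.abs(x))[-k:]
    z = np.zeros_like(x)
    z[idx] = x[idx]
    return z

def ftasl_sketch(A, ys, k, reg=1e-3):
    """Illustrative follow-the-leader loop with sparse projection.

    A   : (m, n) measurement matrix
    ys  : iterable of length-m measurement vectors, arriving online
    k   : target sparsity of the predictions
    reg : fixed ridge term (placeholder for adaptive regularization)

    Returns the sequence of k-sparse predictions, one per round.
    """
    m, n = A.shape
    G = reg * np.eye(n)   # cumulative Gram matrix (regularized)
    b = np.zeros(n)       # cumulative correlation vector
    preds = []
    for y in ys:
        # "Leader": minimizer of the cumulative regularized squared
        # loss over all past rounds, computed before seeing y.
        x_leader = np.linalg.solve(G, b)
        # Project the leader onto k-sparse vectors (approximate step).
        preds.append(hard_threshold(x_leader, k))
        # Fold the new measurement into the cumulative statistics.
        G += A.T @ A
        b += A.T @ y
    return preds
```

On noiseless repeated measurements of a k-sparse signal, the sparse-projected leader converges to that signal, which is the behavior the regret analysis formalizes against an adversarial measurement sequence.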