🤖 AI Summary
This study investigates the estimation error, regret bounds, and statistical inference properties of stochastic gradient descent (SGD) under time-dependent data, encompassing nonstationary, non-mixing time series as well as dependence structures induced by sequential decision-making. For martingale-type covariate and noise processes, the work proposes an SGD variant combined with a "conic" approximation of the decision region, which handles unbounded covariates and avoids the estimation-regret trade-off claimed in prior work. Non-asymptotic tail bounds and an asymptotic normality result support valid statistical inference, and for online sparse regression, aggregated summary statistics enable support recovery. The resulting algorithm attains the statistically optimal convergence rate \(O_p(1/\sqrt{t})\) using only \(O(d)\) storage and \(O(d)\) flops per iteration, while maintaining sharp tail bounds over a potentially infinite time horizon.
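Read literally, the Gaussian-approximation claim can be sketched as follows; the coupling variable \(Z\), the scaling, and the covariance \(\Sigma\) are illustrative assumptions, since the summary specifies only a Gaussian limit with an \(O_p(1/\sqrt{t})\) remainder:

\[
  \sqrt{t}\,\bigl(\theta_t - \theta^\star\bigr) \;=\; Z + O_{\mathbb{P}}\!\bigl(1/\sqrt{t}\bigr),
  \qquad Z \sim \mathcal{N}(0, \Sigma),
\]

where \(\theta_t\) denotes the SGD iterate after \(t\) observations and \(\theta^\star\) the target parameter.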
📝 Abstract
This work investigates the performance of the final iterate produced by stochastic gradient descent (SGD) under temporally dependent data. We consider two complementary sources of dependence: $(i)$ martingale-type dependence in both the covariate and noise processes, which accommodates non-stationary and non-mixing time series data, and $(ii)$ dependence induced by sequential decision making. Our formulation runs in parallel with classical notions of (local) stationarity and strong mixing, while neither framework fully subsumes the other. Remarkably, SGD is shown to automatically accommodate both independent and dependent information under a broad class of stepsize schedules and exploration rate schemes. Non-asymptotically, we show that SGD simultaneously achieves statistically optimal estimation error and regret, extending and improving existing results. In particular, our tail bounds remain sharp even for a potentially infinite horizon $T=+\infty$. Asymptotically, the SGD iterates converge to a Gaussian distribution with only an $O_{\mathbb{P}}(1/\sqrt{t})$ remainder, demonstrating that the supposed estimation-regret trade-off claimed in prior work can in fact be avoided. We further propose a new ``conic'' approximation of the decision region that allows the covariates to have unbounded support. For online sparse regression, we develop a new SGD-based algorithm that uses only $d$ units of storage and requires $O(d)$ flops per iteration, achieving long-term statistical optimality. Intuitively, each incoming observation contributes to estimation accuracy, while aggregated summary statistics guide support recovery.
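As a concrete illustration of the $O(d)$-storage recipe sketched in the abstract, the following is a minimal hedged sketch, not the paper's algorithm: the function name `online_sparse_sgd`, the stepsize schedule $c/\sqrt{t}$, the running average of iterates used as the aggregated summary statistic, and the hard threshold `tau` for support recovery are all illustrative assumptions.

```python
import numpy as np

def online_sparse_sgd(stream, d, c=0.1, tau=0.1):
    """Sketch of online SGD for sparse linear regression with O(d) state.

    Keeps only two length-d vectors: the current iterate `theta` and a
    running average `s` of the iterates, used here as the aggregated
    summary statistic for support recovery.  The stepsize c/sqrt(t) and
    the threshold `tau` are illustrative assumptions.
    """
    theta = np.zeros(d)   # current SGD iterate: O(d) storage
    s = np.zeros(d)       # running mean of iterates: O(d) storage
    for t, (x, y) in enumerate(stream, start=1):
        eta = c / np.sqrt(t)            # decaying stepsize schedule
        grad = (x @ theta - y) * x      # squared-loss gradient: O(d) flops
        theta -= eta * grad             # SGD update
        s += (theta - s) / t            # online mean of the iterate path
    support = np.flatnonzero(np.abs(s) > tau)  # threshold the statistic
    return theta, support

# Toy usage: a stream with a 2-sparse ground truth.
rng = np.random.default_rng(0)
theta_star = np.zeros(10)
theta_star[[1, 7]] = [1.0, -2.0]
stream = ((x, x @ theta_star + 0.1 * rng.standard_normal())
          for x in (rng.standard_normal(10) for _ in range(5000)))
theta_hat, support = online_sparse_sgd(stream, d=10)
print(support)  # expected to recover indices {1, 7} on this toy stream
```

Each iteration touches only length-$d$ vectors, matching the $d$ units of storage and $O(d)$ flops per iteration claimed in the abstract; the thresholded running average plays the role of the "aggregated summary statistics" that guide support recovery.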