🤖 AI Summary
We address the discounted discrete-time linear quadratic regulator (LQR) problem with unknown system parameters. Our method is the first reinforcement learning algorithm that avoids two-point gradient estimation and dispenses with strong stability assumptions—two common limitations in prior work. It integrates single-point stochastic policy evaluation, system identification, and online policy optimization, leveraging Gaussian excitation and an adaptive exploration mechanism. Theoretically, we establish a function evaluation complexity of $\widetilde{\mathcal{O}}(1/\varepsilon)$, improving on prior results that either incur $\widetilde{\mathcal{O}}(1/\varepsilon^2)$ rates or impose restrictive stability requirements. Empirical evaluation on standard LQR benchmarks demonstrates faster convergence and enhanced robustness against model uncertainty. This work establishes a novel analytical paradigm for sample efficiency in model-free optimal control.
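To make the single-point oracle concrete, here is a minimal Python sketch of single-point zeroth-order policy gradient for LQR. All names (`lqr_cost`, `single_point_grad`), the toy system, and the step sizes are illustrative assumptions, not the paper's implementation; the actual algorithm additionally interleaves system identification, Gaussian excitation, and adaptive exploration.

```python
import numpy as np

def lqr_cost(K, A, B, Q, R, gamma=0.99, horizon=200, rng=None):
    """One function evaluation: discounted cost of the policy u_t = -K x_t
    along a single rollout from a freshly drawn random initial state."""
    rng = rng or np.random.default_rng()
    x, cost = rng.standard_normal(A.shape[0]), 0.0
    for t in range(horizon):
        u = -K @ x
        cost += gamma**t * (x @ Q @ x + u @ R @ u)
        x = A @ x + B @ u
    return cost

def single_point_grad(K, cost_fn, r=0.05, rng=None):
    """Single-point zeroth-order gradient estimate:
    E[(d / r) * C(K + r U) * U] approximates grad C(K) for U uniform on
    the sphere. Only ONE rollout, with its own random initialization,
    is needed per estimate."""
    rng = rng or np.random.default_rng()
    U = rng.standard_normal(K.shape)
    U /= np.linalg.norm(U)            # uniform random direction
    d = K.size
    return (d / r) * cost_fn(K + r * U) * U

# Illustrative (hypothetical) policy-gradient loop on a toy stable system.
rng = np.random.default_rng(0)
A, B = np.array([[0.9, 0.1], [0.0, 0.8]]), np.eye(2)
Q, R = np.eye(2), 0.1 * np.eye(2)
K = np.zeros((2, 2))
for _ in range(500):
    g = single_point_grad(K, lambda Kp: lqr_cost(Kp, A, B, Q, R, rng=rng), rng=rng)
    K -= 1e-3 * g                     # plain SGD step on the policy gains
```

Each iteration consumes exactly one rollout, which is what makes the $\widetilde{\mathcal{O}}(1/\varepsilon)$ function-evaluation count meaningful in this regime.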
📝 Abstract
We provide the first known algorithm that provably achieves $\varepsilon$-optimality within $\widetilde{\mathcal{O}}(1/\varepsilon)$ function evaluations for the discounted discrete-time LQR problem with unknown parameters, without relying on two-point gradient estimates. Such estimates are known to be unrealistic in many settings, since they require evaluating two different policies from exactly the same randomly drawn initialization. Our results substantially improve upon the existing literature outside the realm of two-point gradient estimates, which either yields $\widetilde{\mathcal{O}}(1/\varepsilon^2)$ rates or relies heavily on stability assumptions.
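For contrast, the following sketch shows the two-point estimator the abstract argues against; `rollout_cost` is a hypothetical helper that runs one rollout from a given initial state. The defining feature is that both perturbed policies must be evaluated from the identical random initialization, which is generally infeasible on a real system.

```python
import numpy as np

def two_point_grad(K, rollout_cost, state_dim, r=0.05, rng=None):
    """Two-point zeroth-order estimate, shown only for contrast:
    (d / 2r) * [C(K + rU; x0) - C(K - rU; x0)] * U.
    Both rollouts must replay the SAME randomly drawn x0 -- the
    unrealistic oracle the paper's algorithm avoids."""
    rng = rng or np.random.default_rng()
    U = rng.standard_normal(K.shape)
    U /= np.linalg.norm(U)
    x0 = rng.standard_normal(state_dim)   # shared initialization for both rollouts
    d = K.size
    return (d / (2 * r)) * (rollout_cost(K + r * U, x0)
                            - rollout_cost(K - r * U, x0)) * U
```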