🤖 AI Summary
Problem: Convergence analysis of off-policy reinforcement learning algorithms—TD, FQI, and PFQI—under linear function approximation remains fragmented and overly reliant on strong assumptions such as feature linear independence.
Method: We propose a unified iterative framework that models all three algorithms as preconditioned iterative methods for solving the LSTD linear system, grounded in matrix splitting theory.
Contribution/Results: This work provides the first rigorous, unified convergence characterization for TD, FQI, and PFQI, eliminating reliance on restrictive feature assumptions. By introducing preconditioning, we reveal their intrinsic algorithmic relationships and refute erroneous implications (e.g., “TD convergence implies FQI convergence”). We derive a general, weakened convergence criterion based on spectral properties of preconditioned matrices, applicable even to linearly dependent features. Our analysis establishes a new theoretical foundation for stability analysis and algorithm design in off-policy RL.
📝 Abstract
Traditionally, TD and FQI are viewed as differing in the number of updates made toward the target value function: TD makes one update, FQI makes an infinite number, and Partial Fitted Q-Iteration (PFQI) performs a finite number, as in the use of a target network in Deep Q-Networks (DQN) in the off-policy evaluation (OPE) setting. This perspective, however, fails to capture the convergence connections between these algorithms and can lead to incorrect conclusions, for example, that the convergence of TD implies the convergence of FQI. In this paper, we focus on linear value function approximation and offer a new perspective, unifying TD, FQI, and PFQI as the same iterative method for solving the Least Squares Temporal Difference (LSTD) system, but with different preconditioners and matrix splitting schemes: TD uses a constant preconditioner, FQI employs a data-feature-adaptive preconditioner, and PFQI transitions between the two. We then show that, in the context of linear function approximation, increasing the number of updates under the same target value function essentially represents a transition from a constant preconditioner to a data-feature-adaptive one. This unifying perspective also simplifies the analysis of the convergence conditions for these algorithms and clarifies many issues. Consequently, we fully characterize the convergence of each algorithm without assuming specific properties of the chosen features (e.g., linear independence). We also examine how common assumptions about feature representations affect convergence, and identify new conditions on features that are important for convergence. These convergence conditions allow us to establish the convergence connections between these algorithms and to address important open questions.
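The preconditioning view in the abstract can be illustrated with a small numerical sketch. All numbers, the feature choice, and the PFQI preconditioner formula below are illustrative assumptions, not taken from the paper: the idea is that TD, FQI, and PFQI all iterate θ ← θ + M(b − Aθ) on the LSTD system Aθ = b, differing only in the preconditioner M.

```python
import numpy as np

# Toy off-policy evaluation setup (illustrative numbers, not from the paper):
# a 3-state Markov reward process with 2 linear features.
gamma = 0.9
P = np.array([[0.1, 0.6, 0.3],      # target-policy transition matrix
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])
r = np.array([1.0, 0.0, 2.0])       # expected rewards per state
mu = np.array([0.5, 0.3, 0.2])      # off-policy data distribution over states
Phi = np.array([[1.0, 0.0],         # feature matrix (rows = states)
                [0.0, 1.0],
                [1.0, 1.0]])
D = np.diag(mu)

# LSTD linear system  A @ theta = b
A = Phi.T @ D @ (Phi - gamma * P @ Phi)
b = Phi.T @ D @ r
C = Phi.T @ D @ Phi                 # data-feature covariance matrix

def solve_iteratively(M, n_iters=5000):
    """Preconditioned iteration  theta <- theta + M @ (b - A @ theta)."""
    theta = np.zeros(2)
    for _ in range(n_iters):
        theta = theta + M @ (b - A @ theta)
    return theta

alpha = 0.1                                        # TD step size
theta_td  = solve_iteratively(alpha * np.eye(2))   # TD: constant preconditioner alpha*I
theta_fqi = solve_iteratively(np.linalg.inv(C))    # FQI: adaptive preconditioner C^{-1}

# PFQI with m inner updates per target: one natural interpolating preconditioner
# (an assumption consistent with the abstract, not necessarily the paper's exact
# formula) is the truncated Neumann series  M_m = alpha * sum_{i<m} (I - alpha*C)^i,
# which equals alpha*I at m = 1 and tends to C^{-1} as m -> infinity.
def pfqi_preconditioner(m, alpha):
    I = np.eye(2)
    return alpha * sum(np.linalg.matrix_power(I - alpha * C, i) for i in range(m))

# In this example both TD and FQI converge to the direct LSTD solution.
theta_lstd = np.linalg.solve(A, b)
print(np.allclose(theta_td, theta_lstd, atol=1e-6),
      np.allclose(theta_fqi, theta_lstd, atol=1e-6))   # → True True
```

Whether each iteration converges depends on the spectral radius of I − MA, which is the kind of spectral condition on preconditioned matrices the abstract refers to; with these particular numbers both iterations happen to be contractive.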