🤖 AI Summary
In high-dimensional linear control tasks where most input signals are non-stationary noise, identifying the few relevant signals and performing accurate credit assignment remain challenging. Method: This paper proposes Swift-Sarsa, an on-policy temporal-difference reinforcement learning algorithm that integrates SwiftTD's step-size optimization, bound on the effective learning rate, and step-size decay into the True Online Sarsa(λ) framework. It also introduces the operant conditioning benchmark, a simple linear control task in which only a small subset of input signals is relevant for decision making and the rest are noise drawn from a non-stationary distribution. Contribution/Results: On this benchmark, Swift-Sarsa learned to assign credit to the relevant signals without any prior knowledge of the problem's structure, suggesting that solution methods could search over hundreds of millions of features in parallel without performance degradation from noisy or bad features.
📝 Abstract
Javed, Sharifnassab, and Sutton (2024) introduced a new algorithm for TD learning -- SwiftTD -- that augments True Online TD($λ$) with step-size optimization, a bound on the effective learning rate, and step-size decay. In their experiments SwiftTD outperformed True Online TD($λ$) and TD($λ$) on a variety of prediction tasks derived from Atari games, and its performance was robust to the choice of hyper-parameters. In this extended abstract we adapt SwiftTD to control problems. We combine the key ideas behind SwiftTD with True Online Sarsa($λ$) to develop an on-policy reinforcement learning algorithm called $\textit{Swift-Sarsa}$.
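To make the starting point concrete, here is a minimal sketch of a single True Online Sarsa($λ$) update with linear function approximation, the algorithm Swift-Sarsa builds on. The function name and the use of a single scalar step size are illustrative assumptions: Swift-Sarsa's additions per the abstract (per-weight step-size optimization, a bound on the effective learning rate, and step-size decay) are deliberately omitted here.

```python
import numpy as np

def true_online_sarsa_step(w, e, x, x_next, reward, q_old, alpha, gamma, lam):
    """One True Online Sarsa(lambda) update (linear function approximation).

    Swift-Sarsa would replace the scalar step size `alpha` with per-weight
    step sizes adapted online; that mechanism is not shown in this sketch.
    """
    q = w @ x                      # value estimate for current state-action features
    q_next = w @ x_next            # value estimate for next state-action features
    delta = reward + gamma * q_next - q
    # Dutch-style eligibility trace used by the true-online update
    e = gamma * lam * e + x - alpha * gamma * lam * (e @ x) * x
    # True-online weight update, with the (q - q_old) correction term
    w = w + alpha * (delta + q - q_old) * e - alpha * (q - q_old) * x
    return w, e, q_next  # q_next becomes q_old on the following step
```

In Swift-Sarsa, `alpha` would become a vector of per-feature step sizes whose values are adapted by gradient-based meta-learning, which is what lets the algorithm shrink the step sizes of noisy features and grow those of relevant ones.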
We propose a simple benchmark for linear on-policy control called the $\textit{operant conditioning benchmark}$. The key challenge in the operant conditioning benchmark is that only a very small subset of input signals is relevant for decision making. The majority of the signals are noise sampled from a non-stationary distribution. To learn effectively, the agent must differentiate between the relevant signals and the noisy signals, and minimize prediction errors by assigning credit to the weight parameters associated with the relevant signals.
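The input structure described above can be sketched as follows. This is an illustrative generator, not the paper's specification: the class name, signal counts, and the random-walk model of non-stationarity are all assumptions chosen only to show the relevant-versus-noise split.

```python
import numpy as np

class OperantConditioningSketch:
    """Illustrative input generator: a few relevant binary signals that
    determine the rewarding action, plus many distractor signals drawn
    from a slowly drifting (non-stationary) Bernoulli distribution.
    All sizes and drift parameters here are assumptions for illustration."""

    def __init__(self, n_relevant=2, n_noise=100, seed=0):
        self.rng = np.random.default_rng(seed)
        self.n_relevant = n_relevant
        self.n_noise = n_noise
        # Initial firing rates for the distractor signals
        self.noise_rates = self.rng.uniform(0.0, 0.5, n_noise)

    def step(self):
        # Exactly one relevant signal fires; its index is the correct action.
        target = int(self.rng.integers(self.n_relevant))
        relevant = np.zeros(self.n_relevant)
        relevant[target] = 1.0
        # Non-stationarity: noise rates drift via a small random walk.
        self.noise_rates = np.clip(
            self.noise_rates + self.rng.normal(0.0, 0.01, self.n_noise),
            0.0, 0.5)
        noise = (self.rng.random(self.n_noise) < self.noise_rates).astype(float)
        return np.concatenate([relevant, noise]), target
```

An agent observing these concatenated signals must learn that only the first few dimensions predict reward, which is exactly the credit-assignment challenge the benchmark poses.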
Swift-Sarsa, when applied to the operant conditioning benchmark, learned to assign credit to the relevant signals without any prior knowledge of the structure of the problem. This result opens the door for solution methods that learn representations by searching over hundreds of millions of features in parallel without performance degradation due to noisy or bad features.