🤖 AI Summary
This paper addresses the lack of rigorous theoretical analysis for KL-regularized reinforcement learning (RL). We establish, for the first time, a logarithmic regret upper bound without requiring strong coverage assumptions. Methodologically, we formulate an online contextual bandit framework combining optimistic reward estimation with a KL-regularized objective; we introduce a novel decomposition over transition steps and employ Rademacher complexity to characterize the capacity of the function class. Our theoretical contributions are threefold: (1) the first non-asymptotic logarithmic regret bound, of order $\mathcal{O}\big(\eta\log(N_{\mathcal{R}} T)\cdot d_{\mathcal{R}}\big)$; (2) an extension to multi-step MDPs, unifying contextual bandits and RL under a single analysis and achieving theoretically optimal regret rates; and (3) a characterization of the beneficial smoothing effect of KL regularization on the policy optimization landscape, providing the first tight, non-asymptotic theoretical justification for the efficacy of KL penalties in RL from Human Feedback (RLHF).
📝 Abstract
Recent advances in Reinforcement Learning from Human Feedback (RLHF) have shown that KL-regularization plays a pivotal role in improving the efficiency of RL fine-tuning for large language models (LLMs). Despite its empirical advantage, the theoretical difference between KL-regularized RL and standard RL remains largely under-explored. While there is a recent line of work on the theoretical analysis of the KL-regularized objective in decision making \citep{xiong2024iterative, xie2024exploratory, zhao2024sharp}, these analyses either reduce to the traditional RL setting or rely on strong coverage assumptions. In this paper, we propose an optimism-based KL-regularized online contextual bandit algorithm, and provide a novel analysis of its regret. By carefully leveraging the benign optimization landscape induced by the KL-regularization and the optimistic reward estimation, our algorithm achieves an $\mathcal{O}\big(\eta\log(N_{\mathcal{R}} T)\cdot d_{\mathcal{R}}\big)$ logarithmic regret bound, where $\eta, N_{\mathcal{R}}, T, d_{\mathcal{R}}$ denote the KL-regularization parameter, the cardinality of the reward function class, the number of rounds, and the complexity of the reward function class, respectively. Furthermore, we extend our algorithm and analysis to reinforcement learning by developing a novel decomposition over transition steps and obtain a similar logarithmic regret bound.
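For context, the KL-regularized objective referenced throughout typically takes the following standard RLHF form; this is a sketch, where $\pi_0$ denotes the reference policy, $d_0$ the context distribution, and the exact placement of $\eta$ (as a multiplier versus its inverse) is the paper's choice of normalization:

```latex
% KL-regularized objective (standard RLHF form; the paper's exact
% normalization of \eta may differ):
J(\pi) \;=\; \mathbb{E}_{x\sim d_0}\Big[
  \mathbb{E}_{a\sim \pi(\cdot\mid x)}\big[r(x,a)\big]
  \;-\; \eta\,\mathrm{KL}\big(\pi(\cdot\mid x)\,\big\|\,\pi_0(\cdot\mid x)\big)
\Big],
% whose maximizer has the well-known Gibbs (softmax) form
\pi^\star(a\mid x) \;\propto\; \pi_0(a\mid x)\,\exp\!\big(r(x,a)/\eta\big).
```

The closed-form Gibbs maximizer is what makes the optimization landscape benign: the regularized objective is strongly concave in the policy, which is the structural property the logarithmic regret analysis exploits.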