🤖 AI Summary
This work addresses the alignment failure in online human-in-the-loop reinforcement learning from human feedback (RLHF) under neural parameterization, caused by distributional shift between the reward model and the policy. We propose the first deep-learning-adapted bilevel RLHF modeling framework. By introducing a weak gradient dominance assumption and integrating bilevel optimization, neural regularization analysis, and a first-order hypergradient approximation, we establish the first rigorous global convergence guarantee for RLHF algorithms in the neural parameterization setting. Furthermore, we derive a state-of-the-art sample complexity upper bound of $O(\varepsilon^{-7/2})$. This work fills a critical theoretical gap in AI alignment for deep learning scenarios and provides the first analytical paradigm for neural RLHF that simultaneously ensures provable alignment and practical feasibility.
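A minimal sketch of the kind of bilevel objective such a framework entails, shown here purely for illustration: the policy parameters $\theta$ are optimized at the upper level against a reward model whose parameters $\phi^{*}(\theta)$ are themselves fit, at the lower level, to preferences collected under the current policy (the symbols $\beta$, $\pi_{\mathrm{ref}}$, and $\mathcal{L}_{\mathrm{BT}}$ are illustrative placeholders, not necessarily the paper's notation):

$$
\max_{\theta}\ \mathbb{E}_{x \sim \mathcal{D},\; y \sim \pi_{\theta}(\cdot \mid x)}\big[r_{\phi^{*}(\theta)}(x, y)\big] \;-\; \beta\,\mathrm{KL}\big(\pi_{\theta} \,\|\, \pi_{\mathrm{ref}}\big)
\quad \text{s.t.} \quad
\phi^{*}(\theta) \in \arg\min_{\phi}\ \mathcal{L}_{\mathrm{BT}}\big(\phi;\ \mathcal{D}_{\mathrm{pref}}(\pi_{\theta})\big),
$$

where $\mathcal{L}_{\mathrm{BT}}$ denotes a Bradley–Terry-style preference loss over comparisons whose distribution depends on $\pi_{\theta}$; this coupling between the two levels is exactly the reward–policy interdependence that causes distributional shift when the stages are decoupled.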
📝 Abstract
The importance of Reinforcement Learning from Human Feedback (RLHF) in aligning large language models (LLMs) with human values cannot be overstated. RLHF is a three-stage process consisting of supervised fine-tuning (SFT), reward learning, and policy learning. Although there are several offline and online approaches to aligning LLMs, they often suffer from distribution shift issues. These issues arise from the inability to accurately capture the distributional interdependence between the reward learning and policy learning stages. This has led to various approximate approaches, but the theoretical insights and motivations behind them remain largely limited to tabular settings, which do not hold in practice. This gap between theoretical insights and practical implementations is critical, and addressing it is challenging because it requires analyzing the performance of AI alignment algorithms in neural network-parameterized settings. Although bi-level formulations have shown promise in addressing distribution shift issues, they suffer from the hyper-gradient problem, and current approaches lack efficient algorithms to solve it. In this work, we tackle these challenges by employing the bi-level formulation laid out in Kwon et al. (2024) along with the *Weak Gradient Domination* assumption to demonstrate convergence in an RLHF setup, obtaining a sample complexity of $\epsilon^{-\frac{7}{2}}$. Our key contributions are twofold: (i) we propose a bi-level formulation for AI alignment in parameterized settings and introduce a first-order approach to solve this problem; (ii) we analyze the theoretical convergence rates of the proposed algorithm and derive state-of-the-art bounds. To the best of our knowledge, this is the first work to establish convergence rate bounds and global optimality for the RLHF framework in neural network-parameterized settings.
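As a hedged illustration of the first-order approach mentioned in contribution (i), the penalty (value-function) reformulation popularized by Kwon et al. (2024) replaces the bilevel problem, and hence the explicit hyper-gradient, with a single-level penalized objective; $f$, $g$, and $\lambda$ below are generic placeholders rather than the paper's exact notation:

$$
\min_{\theta,\,\phi}\ f(\theta, \phi) \;+\; \lambda\Big(g(\theta, \phi) \;-\; \min_{\phi'} g(\theta, \phi')\Big),
$$

where $f$ is the upper-level (policy-alignment) objective written as a minimization, $g$ is the lower-level reward-learning loss, and $\lambda > 0$ is a penalty parameter. For sufficiently large $\lambda$, stationary points of the penalized problem approximate those of the bilevel problem, and its gradient involves only first-order derivatives of $f$ and $g$, avoiding the second-order terms that make exact hyper-gradient computation impractical under neural parameterization.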