🤖 AI Summary
This work addresses the structured optimization challenge in bilevel reinforcement learning, where the upper-level objective depends on the optimal policy of a lower-level Markov decision process (MDP). The authors propose a single-loop first-order Actor-Critic algorithm that circumvents the need for second-order derivatives, strong regularization, or nested sampling inherent in existing approaches. By reformulating the bilevel objective via a penalty function and incorporating decaying entropy regularization into the lower-level policy updates, the method enables asymptotically unbiased estimation of the upper-level hypergradient. Notably, it efficiently approximates stationary points of the original unregularized bilevel problem without requiring exact solutions to the inner reinforcement learning subproblem. Convergence guarantees are established in the form of finite-time and finite-sample complexity bounds. Empirical validation on GridWorld navigation and human-feedback-driven tweet generation (RLHF) tasks demonstrates the algorithm's effectiveness and scalability.
📝 Abstract
We study a structured bi-level optimization problem where the upper-level objective is a smooth function and the lower-level problem is policy optimization in a Markov decision process (MDP). The upper-level decision variable parameterizes the reward of the lower-level MDP, and the upper-level objective depends on the optimal induced policy. Existing methods for bi-level optimization and RL often require second-order information, impose strong regularization at the lower level, or use samples inefficiently through nested-loop procedures. In this work, we propose a single-loop, first-order actor-critic algorithm that optimizes the bi-level objective via a penalty-based reformulation. We introduce an attenuating entropy regularization into the lower-level RL objective, which enables asymptotically unbiased upper-level hyper-gradient estimation without solving the unregularized RL problem exactly. We establish the finite-time and finite-sample convergence of the proposed algorithm to a stationary point of the original, unregularized bi-level optimization problem through a novel lower-level residual analysis under a special type of Polyak-Łojasiewicz condition. We validate the performance of our method through experiments on a GridWorld goal-position problem and on happy tweet generation through reinforcement learning from human feedback (RLHF).
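The penalty-based, single-loop idea can be sketched on a toy problem. The snippet below is a minimal illustration, not the paper's actual algorithm: a two-armed bandit stands in for the lower-level MDP, the upper-level variable `w` is the reward of arm 0, and the target hit rate `0.8`, penalty weight `lam`, and step sizes are all assumed for illustration. Both the reward parameter `w` and the policy logit `z` are updated once per iteration using only first-order gradients of the penalized objective, with an entropy coefficient `tau` that decays across iterations, mirroring the attenuating regularization described in the abstract.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy instance (illustrative, not the paper's setup): a two-armed
# bandit "MDP". The upper-level variable w sets the reward of arm 0
# (arm 1 pays 0); the lower level is entropy-regularized policy
# optimization over a softmax policy with logit z. The upper-level
# objective asks the induced policy to pick arm 0 with probability 0.8.
lam = 10.0            # penalty weight on the lower-level value gap
lr_z, lr_w = 0.5, 0.02
z, w = 0.0, 0.0       # policy logit and reward parameter

for k in range(5000):
    tau = 1.0 / (1 + k) ** 0.3          # decaying entropy coefficient
    pi0 = sigmoid(z)
    pi1 = 1.0 - pi0

    # Lower-level value V_tau(w, z) = pi0*w + tau*H(pi); its z-gradient:
    dV_dz = pi0 * pi1 * (w + tau * math.log(pi1 / pi0))
    # Upper-level objective f(z) = (pi0 - 0.8)^2; its z-gradient:
    df_dz = 2.0 * (pi0 - 0.8) * pi0 * pi1

    # For a bandit, the tau-regularized optimal policy is closed-form
    # softmax(r/tau); with rewards (w, 0) this is sigmoid(w/tau).
    pistar0 = sigmoid(w / tau)

    # Penalty reformulation Psi = f + lam * (V*_tau - V_tau): a single
    # first-order descent step on Psi in each block per iteration,
    # with no second-order derivatives and no inner loop.
    z -= lr_z * (df_dz - lam * dV_dz)
    w -= lr_w * lam * (pistar0 - pi0)   # d(V*_tau)/dw = pi*_0, dV_tau/dw = pi0

print(f"pi(arm0) = {sigmoid(z):.3f}")
```

In this sketch the lower-level residual is visible directly as the gap `sigmoid(w/tau) - sigmoid(z)`: it need not vanish at any finite iteration, yet the joint iterates still approach a point where the upper-level objective is stationary, which is the behavior the penalty reformulation is designed to achieve.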