AlignIQL: Policy Alignment in Implicit Q-Learning through Constrained Optimization

📅 2024-05-28
🏛️ arXiv.org
📈 Citations: 11
Influential: 0
🤖 AI Summary
This work addresses the challenge of explicitly recovering the implicit policy in Implicit Q-Learning (IQL). We formulate implicit policy extraction as a constrained optimization problem (the first such treatment) and propose AlignIQL and AlignIQL-hard, which align the extracted policy with the learned value function while keeping the value and policy networks decoupled, as in IQL. Theoretically, we establish the validity of weighted regression for policy extraction in IQL, combining the computational simplicity of IQL with the policy interpretability of Implicit Diffusion Q-Learning (IDQL). Methodologically, the approach integrates implicit Q-function modeling, constrained optimization, and weighted regression. Evaluated on the D4RL benchmark, it achieves competitive or state-of-the-art performance, excelling in sparse-reward tasks such as Antmaze and Adroit, where it outperforms both IQL and IDQL by a significant margin.
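As a rough sketch of what such a constrained policy-finding program can look like (an illustrative formulation consistent with the summary above, not necessarily the paper's exact objective; the regularizer f and behavior policy μ are assumed ingredients):

```latex
\begin{aligned}
\min_{\pi}\;\; & \mathbb{E}_{s \sim \mathcal{D},\, a \sim \pi(\cdot \mid s)}
  \left[ f\!\left( \frac{\pi(a \mid s)}{\mu(a \mid s)} \right) \right] \\
\text{s.t.}\;\; & \mathbb{E}_{a \sim \pi(\cdot \mid s)}\left[ Q(s, a) \right] = V(s)
  \quad \text{for all } s, \\
& \int_{\mathcal{A}} \pi(a \mid s)\, \mathrm{d}a = 1,
  \qquad \pi(a \mid s) \ge 0 .
\end{aligned}
```

Introducing Lagrange multipliers for the alignment constraint yields a closed-form weight on dataset actions, which is one way to see why policy extraction reduces to weighted regression.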

📝 Abstract
Implicit Q-learning (IQL) serves as a strong baseline for offline RL; it learns the value function using only dataset actions through expectile regression. However, it is unclear how to recover the implicit policy from the learned implicit Q-function and why IQL can utilize weighted regression for policy extraction. IDQL reinterprets IQL as an actor-critic method and derives weights for the implicit policy; however, these weights hold only for the optimal value function. In this work, we introduce a different way to solve the implicit policy-finding problem (IPF) by formulating it as a constrained optimization problem. Based on this formulation, we propose two practical algorithms, AlignIQL and AlignIQL-hard, which inherit the advantage of decoupling the actor from the critic in IQL and provide insight into why IQL can use weighted regression for policy extraction. Compared with IQL and IDQL, our method keeps the simplicity of IQL while solving the implicit policy-finding problem. Experimental results on D4RL datasets show that our method achieves competitive or superior results compared with other SOTA offline RL methods. In particular, on complex sparse-reward tasks such as Antmaze and Adroit, our method outperforms IQL and IDQL by a significant margin.
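To make "weighted regression for policy extraction" concrete, here is a minimal PyTorch-style sketch of the generic advantage-weighted recipe that IQL-family methods build on. The network names (policy, q_net, v_net), the policy.log_prob interface, the batch layout, and the clipped exponential weight with temperature beta are illustrative assumptions, not AlignIQL's exact weight:

```python
import torch

def extract_policy_step(policy, q_net, v_net, batch, optimizer,
                        beta=3.0, w_max=100.0):
    """One weighted-regression step: fit the policy to dataset actions,
    re-weighting each action by how much the learned critic prefers it."""
    s, a = batch["observations"], batch["actions"]
    with torch.no_grad():
        adv = q_net(s, a) - v_net(s)                # advantage under the implicit critic
        w = torch.exp(beta * adv).clamp(max=w_max)  # exponential weight, clipped for stability
    log_prob = policy.log_prob(s, a)                # log pi(a|s) on dataset pairs (assumed API)
    loss = -(w * log_prob).mean()                   # weighted maximum likelihood
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the weight is computed on dataset actions only, the policy is never sampled during critic training, which is what keeps the actor and critic decoupled.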
Problem

Research questions and friction points this paper is trying to address.

Recovering the implicit policy from the learned Q-function in offline RL
Explaining why weighted regression works for policy extraction
Solving implicit policy-finding via a constrained optimization formulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Formulates implicit policy-finding as a constrained optimization problem
Proposes AlignIQL and AlignIQL-hard, keeping IQL's decoupled actor and critic (see the value-update sketch below)
Explains why weighted regression is valid for policy extraction and uses it to extract the policy
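For reference, the decoupled critic in IQL-style methods is trained by expectile regression on dataset actions alone, never querying the actor. A minimal PyTorch-style sketch of that value update (generic v_net/q_target names and batch layout are assumptions):

```python
import torch

def expectile_loss(diff, tau=0.7):
    # Asymmetric L2: residuals where Q exceeds V are up-weighted, so V(s)
    # tracks an upper expectile of Q(s, a) over dataset actions.
    weight = torch.abs(tau - (diff < 0).float())
    return (weight * diff.pow(2)).mean()

def value_step(v_net, q_target, batch, optimizer, tau=0.7):
    """One critic-side update that never samples from the actor; this is
    the decoupling of actor from critic referred to above."""
    s, a = batch["observations"], batch["actions"]
    with torch.no_grad():
        q = q_target(s, a)
    loss = expectile_loss(q - v_net(s), tau)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```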
Longxiang He
Center for Artificial Intelligence and Robotics, Tsinghua University
Li Shen
Sun Yat-Sen University
Junbo Tan
Xueqian Wang
Tsinghua University
Information Fusion · Target Detection · Radar Imaging · Image Processing