A Tractable Inference Perspective of Offline RL

📅 2023-10-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
In offline reinforcement learning, highly expressive sequence models struggle to simultaneously achieve high-return action generation and precise conditional/constrained reasoning: stochasticity in the data-collection policy and the environment dynamics forces these models to rely on crude probability estimates, undermining their expressivity advantage. This work introduces Tractable Probabilistic Models (TPMs) to offline RL for the first time, proposing the Trifle framework, which preserves strong sequence-modeling capacity while enabling efficient, exact conditional action generation and reasoning under hard constraints. Trifle requires only minimal algorithmic modifications yet achieves state-of-the-art performance across nine Gym-MuJoCo benchmarks. It significantly outperforms existing baselines in stochastic environments and safety-critical RL tasks (such as those imposing strict action-boundary constraints), effectively bridging the gap between model expressivity and practical decision-making performance.
📝 Abstract
A popular paradigm for offline Reinforcement Learning (RL) tasks is to first fit the offline trajectories to a sequence model, and then prompt the model for actions that lead to high expected return. In addition to obtaining accurate sequence models, this paper highlights that tractability, the ability to exactly and efficiently answer various probabilistic queries, plays an important role in offline RL. Specifically, due to the fundamental stochasticity from the offline data-collection policies and the environment dynamics, highly non-trivial conditional/constrained generation is required to elicit rewarding actions. While it is still possible to approximate such queries, we observe that such crude estimates significantly undermine the benefits brought by expressive sequence models. To overcome this problem, this paper proposes Trifle (Tractable Inference for Offline RL), which leverages modern Tractable Probabilistic Models (TPMs) to bridge the gap between good sequence models and high expected returns at evaluation time. Empirically, Trifle achieves state-of-the-art scores on 9 Gym-MuJoCo benchmarks against strong baselines. Further, owing to its tractability, Trifle significantly outperforms prior approaches in stochastic environments and safe RL tasks (e.g., with action constraints) with minimal algorithmic modifications.
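The query the abstract describes, eliciting actions conditioned on a high return, can be made concrete with a toy sketch. This is not the paper's algorithm or data; the states, actions, and counts below are hypothetical. It shows the kind of conditional query "P(return ≥ r | state, action)" that a tractable model can answer exactly, here by brute-force enumeration over a tiny empirical distribution:

```python
# Hypothetical empirical counts of (state, action, return_bucket) triples,
# standing in for an offline dataset. Buckets: 0 = low return, 1 = high return.
counts = {
    ("s0", "a0", 0): 6, ("s0", "a0", 1): 2,
    ("s0", "a1", 0): 3, ("s0", "a1", 1): 5,
}

def p_high_return(state, action, threshold=1):
    """Exact P(return_bucket >= threshold | state, action) by enumeration."""
    total = sum(c for (s, a, r), c in counts.items()
                if s == state and a == action)
    hit = sum(c for (s, a, r), c in counts.items()
              if s == state and a == action and r >= threshold)
    return hit / total if total else 0.0

def best_action(state, actions=("a0", "a1")):
    # Pick the action whose conditional probability of a high return is largest.
    return max(actions, key=lambda a: p_high_return(state, a))

print(best_action("s0"))  # a1 (P = 5/8 vs. 2/8 for a0)
```

Enumeration is exponential in the number of variables, which is exactly why it does not scale to real trajectories; TPMs are designed to answer such queries exactly without enumerating.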
Problem

Research questions and friction points this paper is trying to address.

Stochastic data-collection policies and environment dynamics make eliciting high-return actions a non-trivial conditional/constrained generation problem
Expressive sequence models can only approximate these probabilistic queries, and the crude estimates undermine their modeling advantages
A gap remains between fitting an accurate sequence model and achieving high expected returns at evaluation time
Innovation

Methods, ideas, or system contributions that make the work stand out.

First use of Tractable Probabilistic Models (TPMs) in offline RL, enabling exact and efficient conditional/constrained action generation
Trifle framework integrates TPMs with minimal algorithmic modifications to the sequence-modeling pipeline
Improved performance in stochastic environments and safe RL tasks with hard action constraints
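What "tractable" buys can be sketched with a minimal hand-built probabilistic circuit (a mixture of two product components over binary variables X and Y); this is an illustrative toy, not Trifle's model. In such circuits, marginalizing a variable amounts to setting its leaf to 1, so any marginal or conditional is a single feed-forward pass rather than an exponential sum:

```python
def leaf(p1, val):
    """Bernoulli leaf with P(var=1)=p1; val=None means marginalized (outputs 1)."""
    if val is None:
        return 1.0
    return p1 if val == 1 else 1.0 - p1

def circuit(x=None, y=None):
    """P(X, Y) = 0.4 * Bern_X(0.9)*Bern_Y(0.2) + 0.6 * Bern_X(0.1)*Bern_Y(0.8).

    Hypothetical parameters; the sum node mixes two independent product components.
    """
    c1 = leaf(0.9, x) * leaf(0.2, y)
    c2 = leaf(0.1, x) * leaf(0.8, y)
    return 0.4 * c1 + 0.6 * c2

print(circuit())            # 1.0  (full marginal: normalization check)
print(circuit(x=1))         # 0.42 (exact marginal P(X=1))
print(circuit(x=1, y=1) / circuit(x=1))  # exact conditional P(Y=1 | X=1)
```

The same mechanism, scaled to circuits over trajectory variables, is what lets a TPM answer the conditional and constrained action queries exactly where an autoregressive sequence model must resort to approximate sampling.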