Entropy Regularized Task Representation Learning for Offline Meta-Reinforcement Learning

📅 2024-12-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
In offline meta-reinforcement learning, task representations often overfit to the specific data collection policy, severely limiting cross-task generalization. To address this, we propose an entropy-regularized task representation learning framework that maximizes the conditional entropy of the behavior policy to approximately minimize the mutual information between task representations and the behavior policy, thereby decoupling the two and mitigating the context distribution shift inherent in offline settings. Our method unifies principles from meta-RL, offline RL, and mutual information estimation. Evaluated on MuJoCo benchmarks, it significantly outperforms existing baselines: it yields more robust and faithful task representations both in-distribution and out-of-distribution, and improves downstream adaptation performance.

📝 Abstract
Offline meta-reinforcement learning aims to equip agents with the ability to rapidly adapt to new tasks by training on data from a set of different tasks. Context-based approaches utilize a history of state-action-reward transitions -- referred to as the context -- to infer representations of the current task, and then condition the agent, i.e., the policy and value function, on the task representations. Intuitively, the better the task representations capture the underlying tasks, the better the agent can generalize to new tasks. Unfortunately, context-based approaches suffer from distribution mismatch, as the context in the offline data does not match the context at test time, limiting their ability to generalize to the test tasks. This mismatch leads the task representations to overfit to the offline training data. Intuitively, the task representations should be independent of the behavior policy used to collect the offline data. To address this issue, we approximately minimize the mutual information between the distribution over the task representations and the behavior policy by maximizing the entropy of the behavior policy conditioned on the task representations. We validate our approach in MuJoCo environments, showing that, compared to baselines, our task representations more faithfully represent the underlying tasks, leading our method to outperform prior approaches on both in-distribution and out-of-distribution tasks.
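The core idea above can be illustrated with a minimal numerical sketch: fit a behavior-policy model conditioned on the task representation while adding a bonus for its conditional entropy, which approximately penalizes the mutual information between the task representation and the behavior policy. The diagonal-Gaussian policy model and the weight `lam` are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def gaussian_entropy(log_std):
    # Differential entropy of a diagonal Gaussian policy:
    # H = sum_i [ 0.5 * log(2*pi*e) + log_std_i ]
    return np.sum(0.5 * np.log(2 * np.pi * np.e) + log_std)

def behavior_nll(actions, mean, log_std):
    # Negative log-likelihood of observed actions under the
    # Gaussian behavior-policy model p(a | s, z).
    var = np.exp(2 * log_std)
    return np.sum(0.5 * np.log(2 * np.pi * var)
                  + (actions - mean) ** 2 / (2 * var))

def entropy_regularized_loss(actions, mean, log_std, lam=0.1):
    # Fit the behavior model while rewarding high conditional entropy,
    # approximately minimizing I(task representation; behavior policy).
    # `mean` and `log_std` would come from a network conditioned on the
    # context-inferred task representation z (hypothetical setup).
    return behavior_nll(actions, mean, log_std) - lam * gaussian_entropy(log_std)
```

In this sketch, increasing `lam` trades behavior-cloning accuracy for a behavior model that is less sharply determined by the task representation, which is the decoupling effect the abstract describes.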
Problem

Research questions and friction points this paper is trying to address.

Offline Meta Reinforcement Learning
Generalization Ability
Task Specialization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Offline Meta Reinforcement Learning
Maximizing Behavioral Uncertainty
Task-Agnostic Background Understanding