🤖 AI Summary
To address the prohibitively high cost of online interaction for policy improvement in offline reinforcement learning, this paper proposes the first active reinforcement learning framework tailored to sequential decision-making. Operating under a constrained interaction budget, the framework selects state-action regions with maximal information gain for trajectory collection, integrating uncertainty estimation with policy-gradient-guided active sampling. It further combines offline data with goal-directed online sampling via conservative Q-learning and an exploration priority driven by model prediction error. Evaluated on continuous-control benchmarks (Gym-MuJoCo locomotion, Maze2d, AntMaze, CARLA, and IsaacSimGo1), the method reduces online interaction by up to 75% compared to competitive baselines while improving both sample efficiency and final policy performance.
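The budget-constrained selection step can be sketched under a simplifying assumption: approximate the information gain of a candidate state by the disagreement of an ensemble of value estimators. The paper's actual criterion also incorporates policy-gradient guidance, and the function `select_query_states` and the toy ensemble below are illustrative constructions, not taken from the paper:

```python
import statistics

def select_query_states(candidate_states, q_ensemble, budget):
    """Rank candidate states by ensemble disagreement (a common proxy for
    epistemic uncertainty) and keep the `budget` most uncertain ones for
    online trajectory collection."""
    scored = []
    for s in candidate_states:
        values = [q(s) for q in q_ensemble]          # each member's estimate
        scored.append((statistics.pstdev(values), s))  # disagreement score
    scored.sort(key=lambda t: t[0], reverse=True)
    return [s for _, s in scored[:budget]]

# Toy ensemble of value estimators whose disagreement grows with the state,
# standing in for independently trained Q-networks.
ensemble = [lambda s, k=k: k * s for k in (0.9, 1.0, 1.1)]
states = [0.1, 0.5, 2.0, 5.0]
queries = select_query_states(states, ensemble, budget=2)
print(queries)  # → [5.0, 2.0]: the two most uncertain states
```

Only the selected states would then be used to launch new trajectories, so the number of environment interactions stays within the fixed budget regardless of how large the candidate pool is.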
📝 Abstract
Learning agents that excel at sequential decision-making tasks must continually balance exploration and exploitation for optimal learning. However, such online interactions with the environment can be prohibitively expensive and may be subject to constraints, such as a limited budget for agent-environment interactions or restricted exploration in certain regions of the state space; examples include selecting candidates for medical trials and training agents in complex navigation environments. This setting motivates active reinforcement learning strategies that collect minimal additional experience by reusing existing offline data previously gathered by an unknown behavior policy. In this work, we propose an active reinforcement learning method that collects trajectories to augment existing offline data. Through extensive experiments, we demonstrate that our method reduces additional online interaction with the environment by up to 75% over competitive baselines across various continuous-control environments, including the Gym-MuJoCo locomotion suite as well as Maze2d, AntMaze, CARLA, and IsaacSimGo1. To the best of our knowledge, this is the first work to address the active learning problem in the context of sequential decision-making and reinforcement learning.