AI Summary
In reward-limited reinforcement learning, selecting the most informative samples for human reward annotation at minimal labeling cost is critical to improving policy performance. This paper formally defines the "reward selection" problem and proposes a sample selection framework grounded in reward-free signals, such as state visitation frequency and partial value function estimates, as well as strategies pre-trained on auxiliary evaluative feedback. Unlike conventional active learning approaches, the method does not require an initial reward model; instead, it leverages intrinsic policy structure and environment dynamics to guide annotation decisions. Experiments demonstrate that, with annotations for less than 10% of trajectories, the selected reward subset enables agents to converge to near-optimal policies and to recover robustly from trajectory deviations. The approach substantially reduces annotation overhead while improving sample efficiency and annotation utility.
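To make the setting concrete, here is a minimal sketch of what a limited-feedback training loop could look like in tabular Q-learning. It is an illustrative assumption, not the paper's formulation: the names `select_for_labeling` and `label_budget`, the zero-reward treatment of unlabeled transitions, and the environment interface (`n_states`, `n_actions`, `reset()`, `step()`) are all hypothetical choices.

```python
import numpy as np

# Hypothetical sketch: tabular Q-learning where at most `label_budget`
# state-action pairs ever have their true reward revealed; all other
# transitions are treated as zero-reward (one possible convention).
def train_with_limited_feedback(env, select_for_labeling, label_budget,
                                episodes=500, alpha=0.1, gamma=0.99, eps=0.1):
    Q = np.zeros((env.n_states, env.n_actions))
    labeled = set()  # (state, action) pairs whose reward has been annotated
    rng = np.random.default_rng(0)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # Epsilon-greedy action selection.
            a = int(rng.integers(env.n_actions)) if rng.random() < eps else int(Q[s].argmax())
            s_next, true_r, done = env.step(a)
            # Spend annotation budget only where the strategy asks for a label.
            if len(labeled) < label_budget and select_for_labeling(s, a, Q):
                labeled.add((s, a))
            r = true_r if (s, a) in labeled else 0.0  # unlabeled => no reward signal
            target = r + (0.0 if done else gamma * Q[s_next].max())
            Q[s, a] += alpha * (target - Q[s, a])
            s = s_next
    return Q
```

A uniform-random baseline is simply `select_for_labeling = lambda s, a, Q: rng.random() < p`; a reward-free heuristic of the kind described above would replace it with a score built from visitation counts or partial value estimates.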
Abstract
The ability of reinforcement learning algorithms to learn effective policies is determined by the rewards available during training. However, for practical problems, obtaining large quantities of reward labels is often infeasible due to computational or financial constraints, particularly when relying on human feedback. When reinforcement learning must proceed with limited feedback, where only a fraction of samples receive reward labels, a fundamental question arises: which samples should be labeled to maximize policy performance? We formalize this problem of reward selection for reinforcement learning from limited feedback (RLLF), introducing a new problem formulation that facilitates the study of strategies for selecting impactful rewards. Two types of selection strategies are investigated: (i) heuristics that rely on reward-free information such as state visitation and partial value functions, and (ii) strategies pre-trained using auxiliary evaluative feedback. We find that the critical subsets of rewards are those that (1) guide the agent along optimal trajectories, and (2) support recovery toward near-optimal behavior after deviations. Effective selection methods yield near-optimal policies with significantly fewer reward labels than full supervision, establishing reward selection as a powerful paradigm for scaling reinforcement learning in feedback-limited settings.
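As a rough illustration of the first strategy family (reward-free heuristics), the sketch below ranks unlabeled transitions by how often the agent's rollouts visit their states and spends the labeling budget on the top-ranked candidates. The scoring rule, the `transitions` format, and the function name are assumptions for illustration, not the authors' exact criterion; a partial value estimate could be folded into the score in the same way.

```python
import numpy as np

# Hypothetical reward-free heuristic: spend the labeling budget on the
# transitions whose states the current policy visits most often.
def rank_by_visitation(transitions, n_states, budget):
    """`transitions` is a list of (state, action, next_state) tuples
    collected without reward labels; returns indices of the `budget`
    candidates to send for annotation."""
    visits = np.zeros(n_states)
    for s, _, _ in transitions:
        visits[s] += 1
    scores = np.array([visits[s] for s, _, _ in transitions])
    # A partial value estimate V could be mixed in here, e.g.
    # scores += weight * np.abs([V[s2] - V[s] for s, _, s2 in transitions])
    return np.argsort(-scores)[:budget]  # descending by score
```

Labeling high-visitation states first is one plausible reading of "guiding the agent along optimal trajectories": rewards placed where the policy actually spends its time shape the value function over the states that matter most.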