🤖 AI Summary
This paper addresses the high cost of reward annotation in reinforcement learning by proposing "segment feedback", a novel paradigm wherein each episode is partitioned into $m$ segments and only binary or cumulative rewards are provided at segment termini, lying between per-state-action and full-episode feedback. We establish the first theoretical framework for segment feedback, deriving tight regret upper and lower bounds. Key findings: under binary feedback, regret decays exponentially with $m$, substantially improving sample efficiency; under cumulative feedback, regret is nearly independent of $m$, revealing an inherent information bottleneck. Our algorithm operates within the episodic MDP setting and integrates information-theoretic analysis with empirical validation. This work bridges a critical theory-practice gap and yields an interpretable, scalable learning mechanism for sparse-reward environments.
📝 Abstract
Standard reinforcement learning (RL) assumes that an agent can observe a reward for each state-action pair. However, in practical applications, it is often difficult and costly to collect a reward for each state-action pair. While several works have considered RL with trajectory feedback, it remains unclear how inefficient trajectory feedback becomes for learning when trajectories are long. In this work, we consider a model named RL with segment feedback, which offers a general paradigm filling the gap between per-state-action feedback and trajectory feedback. In this model, we consider an episodic Markov decision process (MDP), where each episode is divided into $m$ segments, and the agent observes reward feedback only at the end of each segment. Under this model, we study two popular feedback settings: binary feedback and sum feedback, where the agent observes a binary outcome and a reward sum according to the underlying reward function, respectively. To investigate the impact of the number of segments $m$ on learning performance, we design efficient algorithms and establish regret upper and lower bounds for both feedback settings. Our theoretical and experimental results show that under binary feedback, increasing the number of segments $m$ decreases the regret at an exponential rate; in contrast, surprisingly, under sum feedback, increasing $m$ does not reduce the regret significantly.
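The feedback model described above can be illustrated with a minimal sketch. The helper below is hypothetical (not from the paper): it splits an episode's per-step rewards into $m$ contiguous segments and returns only end-of-segment observations. For binary feedback, it assumes a logistic link on the segment reward sum, a common choice in binary-feedback models; the paper's exact observation model may differ.

```python
import math
import random

def segment_feedback(rewards, m, kind="sum", rng=None):
    """Return only end-of-segment feedback for an episode.

    rewards: per-step rewards for one episode of length H (hidden from the agent).
    m:       number of segments the episode is divided into.
    kind:    "sum"    -> observe the reward sum of each segment;
             "binary" -> observe a Bernoulli outcome with a logistic link
                         on the segment reward sum (an assumption here).
    """
    rng = rng or random.Random(0)
    H = len(rewards)
    # Contiguous segments of (near-)equal length: boundaries at i*H/m.
    bounds = [round(i * H / m) for i in range(m + 1)]
    feedback = []
    for i in range(m):
        s = sum(rewards[bounds[i]:bounds[i + 1]])
        if kind == "sum":
            feedback.append(s)
        else:
            p = 1.0 / (1.0 + math.exp(-s))  # sigmoid of the segment sum
            feedback.append(1 if rng.random() < p else 0)
    return feedback

# m = 1 recovers trajectory feedback; m = H recovers per-state-action feedback.
```

With $m=1$ this degenerates to trajectory feedback (one observation per episode), and with $m=H$ every state-action pair is observed, so the parameter $m$ interpolates between the two regimes discussed in the abstract.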