🤖 AI Summary
This paper addresses the misalignment between reward models and true objectives in deep reinforcement learning, and the resulting limitations in policy optimization. To this end, it introduces, for the first time, a unified taxonomy that systematically organizes reward modeling along three orthogonal dimensions: modeling source (explicit vs. implicit), mechanism design (supervised vs. interactive), and learning paradigm (static vs. dynamic). The survey comprehensively covers mainstream approaches, including inverse reinforcement learning, preference learning, language-model-based feedback, human demonstration distillation, contrastive learning, and online interactive modeling, and critically analyzes evaluation methodologies and practical deployment challenges. By providing a systematic, cross-cutting review of reward modeling, the work fills a critical gap in the literature, clarifies the field's technical evolution, and identifies four key research frontiers: scalability, generalization, robustness, and human-AI alignment.
📝 Abstract
In reinforcement learning (RL), agents continually interact with the environment and use the resulting feedback to refine their behavior. To guide policy optimization, reward models are introduced as proxies for the desired objectives, such that when the agent maximizes the accumulated reward, it also fulfills the task designer's intentions. Recently, developing reward models that not only align closely with the true objectives but also facilitate policy optimization has attracted significant attention from both academic and industrial researchers. In this survey, we provide a comprehensive review of reward modeling techniques within the deep RL literature. We begin by outlining the background and preliminaries of reward modeling. Next, we present an overview of recent reward modeling approaches, categorizing them by source, mechanism, and learning paradigm. Building on this understanding, we discuss various applications of these reward modeling techniques and review methods for evaluating reward models. Finally, we conclude by highlighting promising research directions in reward modeling. Altogether, this survey covers both established and emerging methods, filling the gap left by the absence of a systematic review of reward models in the current literature.
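To make the "reward model as proxy" idea concrete, here is a minimal, self-contained sketch (not taken from the survey) of one mainstream approach it covers, preference-based reward learning with the Bradley-Terry model: trajectories are summarized as hypothetical hand-crafted feature vectors, and a linear reward model is fit so that preferred trajectories score higher than rejected ones. All names, features, and hyperparameters below are illustrative assumptions.

```python
import math

# Illustrative sketch of preference-based reward modeling (Bradley-Terry).
# The reward model is linear in trajectory features; real systems would use
# neural networks over states and actions instead.

def reward(w, features):
    """Predicted reward: dot product of weights and trajectory features."""
    return sum(wi * fi for wi, fi in zip(w, features))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def preference_step(w, preferred, rejected, lr=0.1):
    """One gradient step on the Bradley-Terry negative log-likelihood,
    -log sigmoid(r(preferred) - r(rejected)), pushing the model to rank
    the preferred trajectory above the rejected one."""
    margin = reward(w, preferred) - reward(w, rejected)
    grad_coeff = sigmoid(margin) - 1.0  # d/dm of -log sigmoid(m)
    return [wi - lr * grad_coeff * (p - r)
            for wi, p, r in zip(w, preferred, rejected)]

# Toy preference data: the annotator favors trajectories whose first
# feature is large (e.g. task progress) over the second (e.g. energy cost).
pairs = [([1.0, 0.0], [0.0, 1.0]),
         ([0.9, 0.2], [0.1, 0.8])]

w = [0.0, 0.0]
for _ in range(200):
    for preferred, rejected in pairs:
        w = preference_step(w, preferred, rejected)

# After training, the learned proxy ranks preferred trajectories higher.
assert reward(w, [1.0, 0.0]) > reward(w, [0.0, 1.0])
```

The learned reward can then stand in for the true objective during policy optimization; the misalignment the paper studies arises precisely when this proxy ranking diverges from the designer's actual intent.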