Reward Models in Deep Reinforcement Learning: A Survey

📅 2025-06-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the misalignment between reward models and true objectives in deep reinforcement learning, and the limitations this imposes on policy optimization. To this end, it introduces, for the first time, a unified taxonomy that organizes reward modeling along three orthogonal dimensions: modeling source (explicit vs. implicit), mechanism design (supervised vs. interactive), and learning paradigm (static vs. dynamic). The survey covers the mainstream approaches (inverse reinforcement learning, preference learning, language-model-based feedback, human demonstration distillation, contrastive learning, and online interactive modeling) and critically analyzes evaluation methodologies and practical deployment challenges. By providing the first systematic, cross-cutting review of reward modeling, it fills a gap in the literature, clarifies the field's technical evolution, and identifies four key research frontiers: scalability, generalization, robustness, and human-AI alignment.

📝 Abstract
In reinforcement learning (RL), agents continually interact with the environment and use the feedback to refine their behavior. To guide policy optimization, reward models are introduced as proxies of the desired objectives, such that when the agent maximizes the accumulated reward, it also fulfills the task designer's intentions. Recently, significant attention from both academic and industrial researchers has focused on developing reward models that not only align closely with the true objectives but also facilitate policy optimization. In this survey, we provide a comprehensive review of reward modeling techniques within the deep RL literature. We begin by outlining the background and preliminaries in reward modeling. Next, we present an overview of recent reward modeling approaches, categorizing them based on the source, the mechanism, and the learning paradigm. Building on this understanding, we discuss various applications of these reward modeling techniques and review methods for evaluating reward models. Finally, we conclude by highlighting promising research directions in reward modeling. Altogether, this survey includes both established and emerging methods, filling the vacancy of a systematic review of reward models in current literature.
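Among the reward-modeling techniques the survey reviews, preference learning is a representative example: a reward model is fit so that preferred trajectories score higher than rejected ones, typically via the Bradley-Terry model. A minimal sketch of that pairwise loss (the function and variable names below are illustrative, not from the paper):

```python
import numpy as np

def bradley_terry_loss(r_preferred, r_rejected):
    """Negative log-likelihood of the preferences under the
    Bradley-Terry model: P(a preferred over b) = sigmoid(r_a - r_b),
    where r_* are the reward model's scores for each trajectory."""
    diff = np.asarray(r_preferred) - np.asarray(r_rejected)
    # -log sigmoid(diff), computed stably as log(1 + exp(-diff))
    return np.mean(np.logaddexp(0.0, -diff))

# Toy check: equal scores give the chance-level loss log(2);
# scoring preferred trajectories higher drives the loss below it.
chance = bradley_terry_loss([0.0], [0.0])           # log 2 ≈ 0.693
better = bradley_terry_loss([2.0, 1.5], [0.5, -1.0])
print(better < chance)
```

Minimizing this loss over a dataset of human preference pairs yields a scalar reward model that can then be maximized by a standard RL algorithm, which is the pipeline underlying RLHF-style methods discussed in the survey.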
Problem

Research questions and friction points this paper is trying to address.

Review reward modeling techniques in the deep RL literature
Categorize reward models by source, mechanism, and learning paradigm
Review evaluation methods and highlight future research directions
Innovation

Methods, ideas, or system contributions that make the work stand out.

First systematic survey of reward modeling in deep RL
Unified taxonomy of approaches by source, mechanism, and learning paradigm
Critical review of evaluation methods and deployment challenges for reward models