REvolve: Reward Evolution with Large Language Models using Human Feedback

📅 2024-06-03
📈 Citations: 1
Influential: 0
🤖 AI Summary
For tasks with implicit standards of success, such as autonomous driving, humanoid locomotion, and dexterous manipulation, where "desirable behavior" is difficult to formalize and reward design is inherently subjective, this paper proposes REvolve, the first end-to-end reward evolution framework. REvolve combines large language model (LLM)-driven reasoning, structured human feedback, evolutionary search, and deep reinforcement learning to automatically turn qualitative judgments into quantitative reward functions. Through an iterative closed loop of reward generation, pairwise ranking, mutation, and policy evaluation, it incorporates experts' tacit knowledge into reward optimization. On the three tasks above, policies trained with REvolve-generated rewards consistently outperform state-of-the-art baselines, including rule-based rewards, inverse reinforcement learning, and zero-shot LLM-derived rewards.

📝 Abstract
Designing effective reward functions is crucial to training reinforcement learning (RL) algorithms. However, this design is non-trivial, even for domain experts, due to the subjective nature of certain tasks that are hard to quantify explicitly. In recent works, large language models (LLMs) have been used for reward generation from natural language task descriptions, leveraging their extensive instruction tuning and commonsense understanding of human behavior. In this work, we hypothesize that LLMs, guided by human feedback, can be used to formulate reward functions that reflect human implicit knowledge. We study this in three challenging settings -- autonomous driving, humanoid locomotion, and dexterous manipulation -- wherein notions of "good" behavior are tacit and hard to quantify. To this end, we introduce REvolve, a truly evolutionary framework that uses LLMs for reward design in RL. REvolve generates and refines reward functions by utilizing human feedback to guide the evolution process, effectively translating implicit human knowledge into explicit reward functions for training (deep) RL agents. Experimentally, we demonstrate that agents trained on REvolve-designed rewards outperform other state-of-the-art baselines.
Problem

Research questions and friction points this paper is trying to address.

Designing effective RL reward functions is hard for tasks whose notion of success is subjective and difficult to quantify
LLMs guided by human feedback can generate reward functions that capture implicit human knowledge
REvolve evolves reward functions using human feedback to improve RL agent performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs generate rewards from human feedback
Evolutionary framework refines reward functions
Translates implicit knowledge into explicit rewards
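The closed loop sketched above (generate reward candidates, rank them via human preference, mutate the winner, evaluate the trained policy) can be illustrated in miniature. Everything below is an illustrative assumption, not the paper's implementation: the paper uses LLM-generated reward code, full RL training, and human raters, whereas this toy replaces them with a linear reward over features, a scoring stub, and a simulated preference oracle.

```python
import random

def mutate_candidates(parent, n=4):
    # Stand-in for LLM-driven generation/mutation of reward functions.
    # Here a "reward function" is just a weight vector over state features.
    return [[w + random.uniform(-0.1, 0.1) for w in parent] for _ in range(n)]

def train_and_evaluate(weights, transitions):
    # Stand-in for training a policy on a candidate reward and scoring the
    # resulting behavior; here, the mean weighted feature sum over logged data.
    totals = (sum(w * f for w, f in zip(weights, feats)) for feats in transitions)
    return sum(totals) / len(transitions)

def human_prefers(score_a, score_b):
    # Stand-in for pairwise human feedback (e.g., ranking rollout videos).
    # Returns True if the second candidate is preferred.
    return score_b > score_a

def revolve_loop(transitions, generations=5, seed=0):
    random.seed(seed)
    best = [0.5, 0.5, 0.5]  # initial reward weights (arbitrary)
    for _ in range(generations):
        # Elitism: keep the current best in the pool so fitness never regresses.
        pool = [best] + mutate_candidates(best)
        scores = [train_and_evaluate(c, transitions) for c in pool]
        winner, winner_score = pool[0], scores[0]
        for cand, score in zip(pool[1:], scores[1:]):
            if human_prefers(winner_score, score):  # pairwise tournament
                winner, winner_score = cand, score
        best = winner
    return best
```

The elitist tournament mirrors the evolutionary structure the summary describes: each generation's survivor is the candidate that human (here, simulated) preferences rank highest.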
Authors

Rishi Hazra, Centre for Applied Autonomous Sensor Systems (AASS), Örebro University, Sweden
Alkis Sygkounas, Doctoral Student, Computer Science
A. Persson, Centre for Applied Autonomous Sensor Systems (AASS), Örebro University, Sweden
Amy Loutfi, Professor of Computer Science, Örebro University and Linköping University (artificial intelligence, robotics, human-robot interaction)
Pedro Zuidberg Dos Martires, Örebro University