RF-Agent: Automated Reward Function Design via Language Agent Tree Search

📅 2026-02-27
📈 Citations: 1 (influential: 0)
🤖 AI Summary
This work addresses inefficient reward function design in low-level control tasks, where prior LLM-based methods underutilize historical feedback and search inefficiently. It models reward function design as a sequential decision-making problem, treating large language models as language agents and integrating Monte Carlo Tree Search (MCTS) to optimize the generation process, which improves contextual reasoning and search efficiency. Evaluated across 17 diverse low-level control tasks, the method demonstrates strong performance, validating its effectiveness and generalization ability and establishing a new framework for automated reward function design in reinforcement learning.

📝 Abstract
Designing efficient reward functions for low-level control tasks is a challenging problem. Recent research aims to reduce reliance on expert experience by using Large Language Models (LLMs) with task information to generate dense reward functions. These methods typically rely on training results as feedback, iteratively generating new reward functions with greedy or evolutionary algorithms. However, they suffer from poor utilization of historical feedback and inefficient search, resulting in limited improvements in complex control tasks. To address this challenge, we propose RF-Agent, a framework that treats LLMs as language agents and frames reward function design as a sequential decision-making process, enhancing optimization through better contextual reasoning. RF-Agent integrates Monte Carlo Tree Search (MCTS) to manage the reward design and optimization process, leveraging the multi-stage contextual reasoning ability of LLMs. This approach better utilizes historical information and improves search efficiency to identify promising reward functions. Outstanding experimental results in 17 diverse low-level control tasks demonstrate the effectiveness of our method. The source code is available at https://github.com/deng-ai-lab/RF-Agent.
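The abstract describes managing reward design with MCTS over LLM-generated candidates. As a rough illustration of that idea (not the paper's actual implementation), the sketch below runs UCT-style tree search where each node holds one candidate reward function; `propose_refinement` and `evaluate` are hypothetical stubs standing in for the LLM rewrite call and the RL training run that scores a candidate.

```python
import math
import random

class Node:
    """One candidate reward function in the search tree."""
    def __init__(self, reward_code, parent=None):
        self.reward_code = reward_code  # source text of the candidate
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value_sum = 0.0

    def uct(self, c=1.4):
        # Unvisited nodes are explored first.
        if self.visits == 0:
            return float("inf")
        exploit = self.value_sum / self.visits
        explore = c * math.sqrt(math.log(self.parent.visits) / self.visits)
        return exploit + explore

def propose_refinement(node):
    """Hypothetical stub for an LLM call that rewrites a reward
    function using the node's history as context."""
    return node.reward_code + f"  # refinement {random.randint(0, 999)}"

def evaluate(reward_code):
    """Hypothetical stub for an RL training run with the candidate
    reward; returns a scalar task score (random here)."""
    return random.random()

def mcts_reward_search(seed_code, iters=50, expand_width=2):
    root = Node(seed_code)
    for _ in range(iters):
        # Selection: descend by UCT until reaching a leaf.
        node = root
        while node.children:
            node = max(node.children, key=lambda n: n.uct())
        # Expansion: ask the "LLM" for refined candidates.
        for _ in range(expand_width):
            node.children.append(Node(propose_refinement(node), parent=node))
        # Simulation: score one newly expanded candidate.
        leaf = node.children[0]
        score = evaluate(leaf.reward_code)
        # Backpropagation: update statistics up to the root.
        n = leaf
        while n is not None:
            n.visits += 1
            n.value_sum += score
            n = n.parent
    # Return the most-visited first-level candidate.
    best = max(root.children, key=lambda n: n.visits)
    return best.reward_code

best = mcts_reward_search("def reward(s, a): return -abs(s)", iters=20)
```

In this toy version the tree statistics alone drive the search; the paper's contribution is that the LLM additionally conditions on the accumulated history along the selected path, which the `propose_refinement` stub only gestures at.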
Problem

Research questions and friction points this paper is trying to address.

reward function design
low-level control tasks
historical feedback utilization
search efficiency
complex control tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reward Function Design
Language Agent
Monte Carlo Tree Search
Large Language Models
Sequential Decision-Making
👥 Authors
Ning Gao, Beihang University
Xiuhui Zhang, Beihang University
Xingyu Jiang, Huazhong University of Science and Technology
Mukang You, Beihang University
Mohan Zhang, UNC Chapel Hill
Yue Deng, Beihang University