RLAR: An Agentic Reward System for Multi-task Reinforcement Learning on Large Language Models

📅 2026-02-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Static, domain-specific reward models are costly to train and generalize poorly to out-of-distribution scenarios. To address this, the work proposes RLAR, which introduces, for the first time, an agent-driven dynamic reward mechanism: a large language model (LLM) agent retrieves the most suitable reward models from the internet and automatically generates programmatic verifiers, enabling dynamic synthesis and self-evolution of reward functions. By combining dynamic tool invocation, program generation, and multi-task alignment, RLAR achieves gains of 10 to 60 points across mathematical reasoning, code generation, translation, and dialogue tasks. On RewardBench-V2, it substantially outperforms static baselines and approaches the empirical performance ceiling.

📝 Abstract
Large language model alignment via reinforcement learning depends critically on reward function quality. However, static, domain-specific reward models are often costly to train and exhibit poor generalization in out-of-distribution scenarios encountered during RL iterations. We present RLAR (Reinforcement Learning from Agent Rewards), an agent-driven framework that dynamically assigns tailored reward functions to individual queries. Specifically, RLAR transforms reward acquisition into a dynamic tool synthesis and invocation task. It leverages LLM agents to autonomously retrieve optimal reward models from the Internet and to synthesize programmatic verifiers through code generation. This allows the reward system to self-evolve with the shifting data distributions during training. Experimental results demonstrate that RLAR yields consistent performance gains ranging from 10 to 60 points across mathematics, coding, translation, and dialogue tasks. On RewardBench-V2, RLAR significantly outperforms static baselines and approaches the performance upper bound, demonstrating superior generalization through dynamic reward orchestration. The data and code are available at: https://github.com/ZhuoerFeng/RLAR.
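The dynamic reward orchestration described above can be pictured as a per-query dispatch: an agent selects, for each prompt, either a programmatic verifier or a fallback reward model. The sketch below is purely illustrative; the router heuristic, function names, and toy verifiers are assumptions standing in for RLAR's LLM agent and are not taken from the paper or its codebase.

```python
# Illustrative sketch of per-query reward dispatch (not RLAR's actual code).
# A toy keyword router stands in for the LLM agent's tool-selection step.
from typing import Callable, Dict

RewardFn = Callable[[str, str], float]

def code_verifier(prompt: str, response: str) -> float:
    # Programmatic verifier: reward 1.0 if the candidate code at least parses.
    try:
        compile(response, "<candidate>", "exec")
        return 1.0
    except SyntaxError:
        return 0.0

def math_verifier(prompt: str, response: str) -> float:
    # Programmatic verifier: exact match against a reference answer
    # embedded in the prompt (toy convention: "... answer: <value>").
    reference = prompt.rsplit("answer:", 1)[-1].strip()
    return 1.0 if response.strip() == reference else 0.0

def generic_reward_model(prompt: str, response: str) -> float:
    # Stand-in for a retrieved neural reward model scoring open-ended replies.
    return min(len(response) / 100.0, 1.0)

REGISTRY: Dict[str, RewardFn] = {
    "code": code_verifier,
    "math": math_verifier,
    "dialogue": generic_reward_model,
}

def route_query(prompt: str) -> str:
    # Toy heuristic in place of the agent's retrieval/synthesis decision.
    if "def " in prompt or "python" in prompt.lower():
        return "code"
    if any(ch.isdigit() for ch in prompt) and "answer:" in prompt:
        return "math"
    return "dialogue"

def reward(prompt: str, response: str) -> float:
    # Dispatch: each query gets the reward function the router assigns it.
    return REGISTRY[route_query(prompt)](prompt, response)
```

In RLAR the routing and verifier construction are performed by the LLM agent itself and evolve with the training distribution; here they are frozen heuristics only to make the dispatch structure concrete.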
Problem

Research questions and friction points this paper is trying to address.

reward function
generalization
out-of-distribution
reinforcement learning
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

dynamic reward orchestration
agent-driven reward system
programmatic verifier synthesis
multi-task reinforcement learning
LLM-based tool retrieval