RLVER: Reinforcement Learning with Verifiable Emotion Rewards for Empathetic Agents

📅 2025-07-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
While large language models (LLMs) exhibit strong logical reasoning capabilities, they lag far behind in high-order empathy and other emotional intelligence (EQ) competencies; moreover, reinforcement learning (RL) with verifiable emotional rewards remains unexplored in conversational settings. Method: We propose RLVER, the first end-to-end RL framework leveraging verifiable emotion rewards: a self-consistent empathetic user simulator produces deterministic emotion scores that guide the acquisition of deep empathy. Using PPO and GRPO, we fine-tune Qwen2.5-7B-Instruct on dialogue rollouts scored with this emotional feedback. Results: On Sentient-Benchmark, the score rises from 13.3 to 79.2, reflecting substantial gains in empathy and insightfulness, while mathematical reasoning and programming abilities remain stable. This validates RLVER's effectiveness, the verifiability of its rewards, and its cross-competency generalizability.

📝 Abstract
Large language models (LLMs) excel at logical and algorithmic reasoning, yet their emotional intelligence (EQ) still lags far behind their cognitive prowess. While reinforcement learning from verifiable rewards (RLVR) has advanced in other domains, its application to dialogue, especially for emotional intelligence, remains underexplored. In this work, we introduce RLVER, the first end-to-end reinforcement learning framework that leverages verifiable emotion rewards from simulated users to cultivate higher-order empathetic abilities in LLMs. Within this framework, self-consistent affective simulated users engage in dialogue rollouts and produce deterministic emotion scores during conversations, serving as reward signals to guide the LLM's learning. Fine-tuning the publicly available Qwen2.5-7B-Instruct model with PPO boosts its Sentient-Benchmark score from 13.3 to 79.2 while largely preserving mathematical and coding competence. Extensive experiments reveal that: (i) RLVER consistently improves multiple dialogue capabilities; (ii) thinking and non-thinking models show distinct trends: thinking models excel in empathy and insight, while non-thinking models favor action; (iii) GRPO often yields stable gains, while PPO can push certain capabilities to a higher ceiling; (iv) more challenging environments are not always better; moderate ones can yield stronger outcomes. Our results show that RLVER is a practical route toward emotionally intelligent and broadly capable language agents.
Problem

Research questions and friction points this paper is trying to address.

Enhancing emotional intelligence in large language models
Applying verifiable emotion rewards to empathetic dialogue systems
Balancing empathy improvement with cognitive skill preservation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses verifiable emotion rewards for reinforcement learning
Employs simulated users to generate emotion scores
Fine-tunes LLMs with PPO for empathetic abilities
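The three Innovation points above can be sketched as a toy reward loop. This is a minimal illustration, not the paper's actual simulator: the `SimulatedUser` class, the keyword-based scoring rule, and all names are hypothetical stand-ins for RLVER's self-consistent affective user, whose key property is shown here, namely that the emotion score is deterministic and can serve as a verifiable reward for a PPO/GRPO update.

```python
# Hypothetical sketch: a simulated user tracks a deterministic emotion score
# over a dialogue rollout; the final score is the verifiable reward used to
# update the policy (PPO/GRPO step omitted). All logic here is illustrative.
from dataclasses import dataclass, field


@dataclass
class SimulatedUser:
    """Toy affective user with an emotion score clamped to [0, 100]."""
    emotion: float = 50.0
    history: list = field(default_factory=list)

    def react(self, agent_utterance: str) -> float:
        # Stand-in scoring rule: empathetic wording raises the score,
        # dismissive wording lowers it. Deterministic by construction.
        text = agent_utterance.lower()
        delta = 0.0
        if any(w in text for w in ("understand", "sorry", "feel")):
            delta += 10.0
        if any(w in text for w in ("whatever", "just move on")):
            delta -= 15.0
        self.emotion = max(0.0, min(100.0, self.emotion + delta))
        self.history.append((agent_utterance, self.emotion))
        return self.emotion


def rollout_reward(agent_turns: list) -> float:
    """Run one rollout against a fresh simulated user; return the final
    emotion score as the scalar reward for that trajectory."""
    user = SimulatedUser()
    score = user.emotion
    for turn in agent_turns:
        score = user.react(turn)
    return score


# An empathetic rollout earns a higher verifiable reward than a dismissive one.
empathetic = ["I understand how hard this is.", "I'm sorry you feel that way."]
dismissive = ["Whatever, just move on."]
```

Because the score is a deterministic function of the dialogue, the same rollout always yields the same reward, which is the "verifiable" property that distinguishes this setup from learned reward models.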
Authors

Peisong Wang (CASIA) - Deep Neural Network Acceleration and Compression
Ruotian Ma (Hunyuan AI Digital Human, Tencent)
Bang Zhang (Hunyuan AI Digital Human, Tencent)
Xingyu Chen (Hunyuan AI Digital Human, Tencent)
Zhiwei He (Hunyuan AI Digital Human, Tencent)
Kang Luo (Hunyuan AI Digital Human, Tencent)
Qingsong Lv (Tsinghua University) - Computer Science, Machine Learning
Qingxuan Jiang (Graduate Student, MIT) - Machine Learning, Optimization
Zheng Xie (Hunyuan AI Digital Human, Tencent)
Shanyi Wang (Hunyuan AI Digital Human, Tencent)
Yuan Li (Hunyuan AI Digital Human, Tencent)
Fanghua Ye (University College London) - Conversational AI, AI Assistants, Graph, NLP, LLM
Jian Li (Hunyuan AI Digital Human, Tencent)
Yifan Yang (Hunyuan AI Digital Human, Tencent)
Zhaopeng Tu (Tech Lead @ Tencent Digital Human) - Digital Human, Agents, Large Language Models, Machine Translation
Xiaolong Li (Hunyuan AI Digital Human, Tencent)