🤖 AI Summary
Large language models (LLMs) exhibit strong logical reasoning but lag markedly in high-order empathy and other emotional-intelligence (EQ) competencies, and reinforcement learning (RL) with verifiable emotional rewards has remained unexplored in conversational settings. Method: RLVER is the first end-to-end RL framework to leverage verifiable emotion rewards: a self-consistent empathetic user simulator engages in dialogue rollouts and emits deterministic emotion scores, which serve as the reward signal for learning deep empathy. Qwen2.5-7B-Instruct is fine-tuned on these rollouts with PPO and GRPO. Results: the Sentient-Benchmark score rises from 13.3 to 79.2, reflecting substantial gains in empathy and insightfulness, while mathematical reasoning and coding abilities remain stable. This validates RLVER's effectiveness, the verifiability of its rewards, and its cross-competency generalization.
📝 Abstract
Large language models (LLMs) excel at logical and algorithmic reasoning, yet their emotional intelligence (EQ) still lags far behind their cognitive prowess. While reinforcement learning from verifiable rewards (RLVR) has advanced in other domains, its application to dialogue, especially for emotional intelligence, remains underexplored. In this work, we introduce RLVER, the first end-to-end reinforcement learning framework that leverages verifiable emotion rewards from simulated users to cultivate higher-order empathetic abilities in LLMs. Within this framework, self-consistent affective simulated users engage in dialogue rollouts and produce deterministic emotion scores during conversations, which serve as reward signals to guide the LLM's learning. Fine-tuning the publicly available Qwen2.5-7B-Instruct model with PPO boosts its Sentient-Benchmark score from 13.3 to 79.2 while largely preserving mathematical and coding competence. Extensive experiments reveal that: (i) RLVER consistently improves multiple dialogue capabilities; (ii) thinking and non-thinking models show distinct trends: thinking models excel in empathy and insight, while non-thinking models favor action; (iii) GRPO often yields stable gains, while PPO can push certain capabilities to a higher ceiling; (iv) more challenging environments are not always better; moderately challenging ones can yield stronger outcomes. Our results show that RLVER is a practical route toward emotionally intelligent and broadly capable language agents.
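To make the reward mechanism concrete, here is a minimal, hypothetical sketch of the rollout-and-score loop the abstract describes. All names (`emotion_score`, `rollout`, the marker list, the toy policies) are illustrative assumptions, not the paper's implementation: the real system uses an LLM-based self-consistent user simulator and a PPO/GRPO trainer, whereas this sketch substitutes a trivial deterministic scorer to show why the reward is verifiable (the same transcript always yields the same number).

```python
# Hypothetical sketch of a verifiable-emotion-reward rollout, NOT the paper's
# implementation. A deterministic scorer stands in for the self-consistent
# simulated user; the episode-level score is the scalar reward a PPO/GRPO
# trainer would optimize.

def emotion_score(assistant_turns):
    """Deterministic stand-in for the simulated user's emotion score (0-100).

    Counts assistant turns containing empathetic markers, so identical
    transcripts always receive identical rewards (the 'verifiable' property).
    """
    markers = ("understand", "sorry", "feel")
    hits = sum(any(m in turn.lower() for m in markers) for turn in assistant_turns)
    return min(100, 40 * hits)

def rollout(policy, opening, n_turns=2):
    """Collect one dialogue episode and its scalar emotion reward."""
    dialogue = [("user", opening)]
    assistant_turns = []
    for _ in range(n_turns):
        reply = policy(dialogue)          # policy conditions on the history
        dialogue.append(("assistant", reply))
        assistant_turns.append(reply)
        dialogue.append(("user", "..."))  # simulated user's next turn (elided)
    return dialogue, emotion_score(assistant_turns)

# Toy policies: an empathetic one outscores a dismissive one.
empathetic = lambda d: "I understand how you feel."
dismissive = lambda d: "Just move on."

_, r_emp = rollout(empathetic, "I had a rough day.")
_, r_dis = rollout(dissmissive := dismissive, "I had a rough day.")
print(r_emp, r_dis)  # -> 80 0
```

Because the scorer is a pure function of the transcript, any rollout can be re-scored after the fact, which is what lets the emotion reward play the same role that exact-match checkers play in RLVR for math and code.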