AI Summary
Open-domain tasks pose significant challenges for reinforcement learning from human feedback (RLHF) and reinforcement learning from verifiable rewards (RLVR): their subjective nature and lack of objective ground-truth answers hinder the acquisition of reliable, externally grounded reward signals. To address this, the authors propose Self-Examining Reinforcement Learning (SERL), a framework in which a large language model (LLM) serves as both generator (Actor) and evaluator (Judge), enabling internal, self-supervised optimization via a dual reward mechanism: (i) a Copeland-style pairwise comparison reward derived from judgments over a group of generated responses, and (ii) a self-consistency reward that improves the Judge's reliability. This yields unsupervised, closed-loop self-improvement without external annotations. On AlpacaEval 2, SERL raises Qwen3-8B's LC win rate from 52.37% to 59.90%, outperforming existing self-improvement approaches, matching the significantly larger Qwen3-32B, and achieving state-of-the-art results among comparable methods.
Abstract
Reinforcement Learning (RL) has been shown to improve the capabilities of large language models (LLMs). However, applying RL to open-domain tasks faces two key challenges: (1) the inherent subjectivity of these tasks precludes the verifiable rewards required by Reinforcement Learning with Verifiable Rewards (RLVR); (2) Reinforcement Learning from Human Feedback (RLHF) relies on external reward mechanisms. To overcome these limitations, we propose Self-Examining Reinforcement Learning (SERL), a novel self-improving framework in which the LLM serves as both Actor and Judge. SERL introduces two synergistic reward mechanisms that require no external signals. On the one hand, to improve the Actor's capability, we derive rewards from Copeland-style pairwise comparison judgments across a group of generated responses. On the other hand, to improve the Judge's reliability, we propose a self-consistency reward that encourages coherent judgments. This process refines the Judge's capability, which in turn provides a more robust reward signal for the Actor. Experiments show that our method outperforms existing self-improvement training methods: SERL improves the LC win rate of Qwen3-8B on AlpacaEval 2 from 52.37% to 59.90%. To the best of our knowledge, our method achieves state-of-the-art performance among self-improving approaches. Furthermore, it performs comparably to significantly larger models such as Qwen3-32B, demonstrating superior effectiveness and robustness on open-domain tasks.
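The Copeland-style reward described above can be illustrated with a minimal sketch. This is not the paper's implementation: the `judge` callable (which returns the preferred of two responses) and the +1/-1 win/loss scoring are illustrative assumptions standing in for the LLM-as-Judge comparisons.

```python
from itertools import combinations

def copeland_rewards(responses, judge):
    """Copeland-style aggregation over a group of responses (sketch).

    `judge(a, b)` is a hypothetical stand-in for the LLM Judge: it returns
    whichever of the two responses it prefers. Each pairwise win adds +1 to
    a response's score and each loss subtracts 1, so the score reflects how
    often a response beats its siblings in round-robin comparisons.
    """
    scores = {i: 0 for i in range(len(responses))}
    for i, j in combinations(range(len(responses)), 2):
        winner = judge(responses[i], responses[j])
        if winner == responses[i]:
            scores[i] += 1
            scores[j] -= 1
        elif winner == responses[j]:
            scores[j] += 1
            scores[i] -= 1
        # Ties (judge returns neither) leave both scores unchanged.
    return scores
```

With a group of n responses, each response participates in n-1 comparisons, so scores range from -(n-1) to n-1 and can be normalized into a per-response RL reward.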