On the Learning Dynamics of RLVR at the Edge of Competence

📅 2026-02-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates how reinforcement learning with verifiable rewards (RLVR) can push performance past capability limits in long-horizon compositional reasoning tasks. Building a theoretical framework for Transformer training dynamics, the study adapts tools from Fourier analysis on finite groups (a novel application in this context) and shows that the smoothness of the "difficulty spectrum" governs learning dynamics. It further identifies a "relay effect" that enables continuous performance gains at the model's capability frontier. Theoretical predictions are validated through synthetic experiments: smooth difficulty spectra support steady learning, whereas abrupt spectral transitions produce prolonged plateaus followed by phase-transition-like breakthroughs. These insights provide a principled foundation for designing efficient curriculum and data-mixing strategies in complex reasoning domains.

📝 Abstract
Reinforcement learning with verifiable rewards (RLVR) has been a main driver of recent breakthroughs in large reasoning models. Yet it remains a mystery how rewards based solely on final outcomes can help overcome the long-horizon barrier to extended reasoning. To understand this, we develop a theory of the training dynamics of RL for transformers on compositional reasoning tasks. Our theory characterizes how the effectiveness of RLVR is governed by the smoothness of the difficulty spectrum. When data contains abrupt discontinuities in difficulty, learning undergoes grokking-type phase transitions, producing prolonged plateaus before progress recurs. In contrast, a smooth difficulty spectrum leads to a relay effect: persistent gradient signals on easier problems elevate the model's capabilities to the point where harder ones become tractable, resulting in steady and continuous improvement. Our theory explains how RLVR can improve performance at the edge of competence, and suggests that appropriately designed data mixtures can yield scalable gains. As a technical contribution, our analysis develops and adapts tools from Fourier analysis on finite groups to our setting. We validate the predicted mechanisms empirically via synthetic experiments.
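As a rough illustration of the relay effect described in the abstract (this toy model is not from the paper; the capability scalar, the sigmoid success model, and the gradient proxy `p * (1 - p)` are all simplifying assumptions), one can simulate how a smooth difficulty spectrum keeps some problem near the model's frontier at every step, while a gap in difficulty starves the learning signal:

```python
import math

def train(difficulties, steps=4000, lr=0.05):
    """Toy relay-effect sketch (hypothetical model, not the paper's setup).

    A single scalar c stands in for model capability; success probability
    on a problem of difficulty d is sigmoid(c - d). The outcome-reward
    gradient per problem is approximated by p * (1 - p), which peaks when
    success is near 50%, i.e. at the edge of competence.
    """
    c = 0.0
    for _ in range(steps):
        grad = sum(
            p * (1 - p)
            for p in (1 / (1 + math.exp(-(c - d))) for d in difficulties)
        )
        c += lr * grad / len(difficulties)
    return c

smooth = [i * 0.5 for i in range(20)]          # difficulties 0.0 .. 9.5, no gaps
abrupt = [0.0, 0.5, 1.0, 8.0, 8.5, 9.0]        # easy cluster, then a jump

print(train(smooth), train(abrupt))
```

With the smooth spectrum, mastering each difficulty level brings the next one into gradient range, so capability climbs steadily; with the abrupt spectrum, once the easy cluster is solved the gradient from the hard cluster is exponentially small, producing the plateau-then-breakthrough dynamic the abstract describes.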
Problem

Research questions and friction points this paper is trying to address.

reinforcement learning
verifiable rewards
long-horizon reasoning
learning dynamics
edge of competence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement Learning with Verifiable Rewards
Learning Dynamics
Difficulty Spectrum
Relay Effect
Fourier Analysis on Finite Groups