Theoretical Modeling of LLM Self-Improvement Training Dynamics Through Solver-Verifier Gap

📅 2025-06-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing self-improvement mechanisms for large language models (LLMs) lack a rigorous theoretical foundation. Method: We propose the “Solver-Verifier Gap” framework—the first analytically tractable theory of LLM self-improvement dynamics—by decoupling model capability into solver and verifier dimensions, modeling their co-evolution as a dynamical system, and predicting final performance gains from early-stage training data. Empirical validation spans multiple models and datasets. Contributions: (1) We identify an intrinsic optimization drive: performance gains arise from the solver’s lag behind the verifier, creating persistent pressure for refinement. (2) We prove that injecting finite external data at any training stage does not alter the asymptotic performance upper bound, providing theoretical justification for flexible data-intervention strategies. (3) We establish the first predictive and interpretable theoretical paradigm for LLM self-improvement—grounded in formal analysis, empirically validated, and amenable to principled design.
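The paper's actual equations are not reproduced on this page. As a purely illustrative toy (all parameter names and values here are hypothetical, not the paper's model), the "intrinsic optimization drive" in contribution (1) can be sketched as a discrete-time system in which the solver's score improves at a rate proportional to the solver-verifier gap, so progress stalls as the gap closes:

```python
# Toy sketch of gap-driven self-improvement (illustrative only):
# solver score s_t chases a fixed verifier score v0, with the
# update rate proportional to the current gap (v0 - s_t).

def simulate(s0=0.4, v0=0.8, eta=0.3, steps=20):
    """Simulate solver accuracy under gap-proportional improvement."""
    s = s0
    history = [s]
    for _ in range(steps):
        s = s + eta * (v0 - s)  # refinement pressure = solver-verifier gap
        history.append(s)
    return history

traj = simulate()
# The trajectory increases monotonically and saturates toward v0,
# i.e. the verifier capability caps the asymptotic solver performance.
```

Under this toy dynamic, adding a one-off boost to `s` at any step changes the transient but not the limit `v0`, which mirrors the spirit of contribution (2) about finite external data not altering the asymptotic upper bound.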

📝 Abstract
Self-improvement is among the most prominent techniques in the realm of large language models (LLMs), aiming to enhance LLM performance without relying on external data. Despite its significance, how LLM performance generally evolves during the self-improvement process remains underexplored. In this paper, we theoretically model the training dynamics of self-improvement via the concept of the solver-verifier gap. This is inspired by the conjecture that the performance enhancement from self-improvement stems from the gap between the LLM's solver capability and its verifier capability. Based on the theoretical framework, we further show how to predict the ultimate power of self-improvement using only information from the first few training epochs. We empirically validate the effectiveness of the theoretical model on various LLMs and datasets. Beyond self-improvement, we extend our analysis to investigate how external data influences these dynamics within the framework. Notably, we find that under limited external data regimes, such external data can be utilized at any stage without significantly affecting final performance, which accords with empirical observations.
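The abstract's claim of predicting "ultimate power" from the first few epochs can be illustrated with a minimal extrapolation sketch. The functional form below (exponential saturation, s_t = A − B·r^t) is an assumption chosen because it is the closed-form solution of gap-proportional dynamics; it is not necessarily the paper's fitted model, and the three-point closed-form fit is a hypothetical simplification:

```python
def predict_asymptote(s0, s1, s2):
    """Fit s_t = A - B * r**t through three consecutive early-epoch
    scores and return the predicted asymptote A.

    Requires the successive gains to shrink geometrically (0 < r < 1);
    r and A then follow in closed form from the three points.
    """
    r = (s2 - s1) / (s1 - s0)
    assert 0 < r < 1, "scores must saturate geometrically"
    return s0 + (s1 - s0) / (1 - r)

# Synthetic check: scores generated from A=0.9, B=0.5, r=0.6;
# the fit recovers the true asymptote 0.9 up to float rounding.
scores = [0.9 - 0.5 * 0.6 ** t for t in (0, 1, 2)]
predicted = predict_asymptote(*scores)
```

In practice one would fit such a curve to several noisy early-epoch scores by least squares rather than solving through three exact points; the sketch only shows why early-stage data can pin down the final plateau.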
Problem

Research questions and friction points this paper is trying to address.

Modeling LLM self-improvement dynamics via solver-verifier gap
Predicting self-improvement power from early training epochs
Analyzing external data impact on self-improvement dynamics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Model self-improvement via solver-verifier gap
Predict final performance from early epochs
Analyze external data impact on dynamics
Yifan Sun
School of Statistics and Data Science, Shanghai University of Finance and Economics
Yushan Liang
School of Statistics and Data Science, Shanghai University of Finance and Economics
Zhen Zhang
School of Statistics and Data Science, Shanghai University of Finance and Economics
Jiaye Teng
Tsinghua University
Learning Theory